Project Part 4

  • Importing Necessary modules
In [1]:
import os
import numpy as np
import pandas as pd
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

import warnings
warnings.filterwarnings('ignore')

from sklearn.model_selection import train_test_split
import itertools

from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau
from keras.models import Sequential, Model
from keras.layers import Dense, Activation, Flatten, Dropout, concatenate, Input, Conv2D, MaxPooling2D
from keras.optimizers import Adam, Adadelta
from keras.layers.advanced_activations import LeakyReLU
from keras.utils.np_utils import to_categorical

import glob
Using TensorFlow backend.
In [2]:
from sklearn.metrics import classification_report
In [3]:
!pip install tflearn
Requirement already satisfied: tflearn in c:\programdata\anaconda3\lib\site-packages (0.5.0)
Requirement already satisfied: six in c:\programdata\anaconda3\lib\site-packages (from tflearn) (1.15.0)
Requirement already satisfied: numpy in c:\programdata\anaconda3\lib\site-packages (from tflearn) (1.18.2)
Requirement already satisfied: Pillow in c:\programdata\anaconda3\lib\site-packages (from tflearn) (7.2.0)

import tflearn.datasets.oxflower17 as oxflower17

  • Initializing Image height/width/size/epochs as we are using it in multiple places
In [4]:
img_height=100
img_width=100
batch_size=32
nb_epochs=25
In [5]:
#Using data present in local folder

Loading Data

Exploring the folders containing data

In [7]:
def get_immediate_subdirectories(a_dir):
    return [name for name in os.listdir(a_dir)
            if os.path.isdir(os.path.join(a_dir, name))]
In [8]:
specPath='F:\\GreatLearning\AI\\ComputerVision\\Project\\Flowers-Classification\\17flowers-train\\jpg'
specPathTest='F:\\GreatLearning\\AI\\ComputerVision\\Project\\Flowers-Classification\\Test\\'
cat_Folder_list=get_immediate_subdirectories(specPath)

List of species in the train folders

In [9]:
Flower_species=cat_Folder_list
print('List of Flower species: ', Flower_species)
List of Flower species:  ['0', '1', '10', '11', '12', '13', '14', '15', '16', '2', '3', '4', '5', '6', '7', '8', '9']
  • we have 17 species of flowers tagged with numbers from 0 to 16

Nummber of images in the train sub folders along with categories

In [10]:
#No. of images under each plant species foler for train
for img in Flower_species:
    print('{}   -->   {} training images'.format(img, len(os.listdir(os.path.join(specPath, img)))))
0   -->   80 training images
1   -->   80 training images
10   -->   80 training images
11   -->   88 training images
12   -->   82 training images
13   -->   85 training images
14   -->   80 training images
15   -->   80 training images
16   -->   80 training images
2   -->   80 training images
3   -->   80 training images
4   -->   80 training images
5   -->   80 training images
6   -->   80 training images
7   -->   80 training images
8   -->   80 training images
9   -->   80 training images
  • There are 17 species are available for training and each flower species contains 80-85 images
In [11]:
import glob
rootdir='F:\\GreatLearning\\AI\\ComputerVision\\Project\\'
rootJPG=os.path.join(os.path.join(rootdir,'Flowers-Classification\\17flowers-train'),'jpg')
os.chdir(os.path.join(specPath,Flower_species[0])) #changing current directory to open file easily
count=0
imgList=[]
for file in glob.glob("*.jpg"):
    print(file)
    count+=1
    imgList.append(file)
    if (count==10):
        break
image_0001.jpg
image_0002.jpg
image_0003.jpg
image_0004.jpg
image_0005.jpg
image_0006.jpg
image_0007.jpg
image_0008.jpg
image_0009.jpg
image_0010.jpg

Data visualisation:

In [12]:
from PIL import Image
Image.open("image_0002.jpg")
Out[12]:
In [13]:
import imageio
import matplotlib.pyplot as plt
%matplotlib inline

#capture basic details of images
def imgBasics(path,imgName):
    img1= os.path.join(path, imgName)
    pic = imageio.imread(img1)
    plt.figure(figsize = (5,5))
    plt.imshow(pic)

    #Basic properties of image
    print('Type of the image : ' , type(pic)) 
    print('Shape of the image : {}'.format(pic.shape)) 
    print('Image Hight {}'.format(pic.shape[0])) 
    print('Image Width {}'.format(pic.shape[1])) 
    print('Dimension of Image {}'.format(pic.ndim))
    print('Image size {}'.format(pic.size)) 
    print('Maximum RGB value in this image {}'.format(pic.max())) 
    print('Minimum RGB value in this image {}'.format(pic.min()))
    print('Value of only R channel {}'.format(pic[ 100, 50, 0])) 
    print('Value of only G channel {}'.format(pic[ 100, 50, 1])) 
    print('Value of only B channel {}'.format(pic[ 100, 50, 2]))
In [14]:
firstSpecies=os.path.join(specPath,Flower_species[0])
imgBasics(firstSpecies,imgList[0])
Type of the image :  <class 'imageio.core.util.Array'>
Shape of the image : (500, 689, 3)
Image Hight 500
Image Width 689
Dimension of Image 3
Image size 1033500
Maximum RGB value in this image 255
Minimum RGB value in this image 0
Value of only R channel 81
Value of only G channel 76
Value of only B channel 73
In [15]:
imgBasics(firstSpecies,imgList[1])
Type of the image :  <class 'imageio.core.util.Array'>
Shape of the image : (500, 666, 3)
Image Hight 500
Image Width 666
Dimension of Image 3
Image size 999000
Maximum RGB value in this image 255
Minimum RGB value in this image 0
Value of only R channel 50
Value of only G channel 43
Value of only B channel 51
In [16]:
imgBasics(firstSpecies,imgList[9])
Type of the image :  <class 'imageio.core.util.Array'>
Shape of the image : (500, 536, 3)
Image Hight 500
Image Width 536
Dimension of Image 3
Image size 804000
Maximum RGB value in this image 255
Minimum RGB value in this image 0
Value of only R channel 139
Value of only G channel 142
Value of only B channel 133
In [17]:
imgBasics(firstSpecies,imgList[5])
Type of the image :  <class 'imageio.core.util.Array'>
Shape of the image : (500, 555, 3)
Image Hight 500
Image Width 555
Dimension of Image 3
Image size 832500
Maximum RGB value in this image 252
Minimum RGB value in this image 0
Value of only R channel 24
Value of only G channel 34
Value of only B channel 35
  • From the above observed images, each image has same height but diff width
  • We can see the color composition are diff
  • images were taken at different light conditions
  • images were taken at different angles as well
  • not all images are center weighted, few have multiple images as well
  • For color concentratin as well, we can notice the distribution is spreaded on R,G & B.
  • Lot of baackground information is also present including the leaves or trees or presence of other objects as well
In [30]:
rootdir='F:\\GreatLearning\\AI\\ComputerVision\\Project\\'
os.chdir(rootdir) #resetting back to original directory
In [ ]:
 

Loading data from all folders along with mapped categories

In [13]:
import glob
images_per_class = {}
for class_folder_name in os.listdir(specPath):
    class_folder_path = os.path.join(specPath, class_folder_name)
    if os.path.isdir(class_folder_path):
        class_label = class_folder_name
        images_per_class[class_label] = []
        for image_path in glob.glob(os.path.join(class_folder_path, "*.jpg")):
            img = cv2.imread(image_path)
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            #image_bgr = cv2.imread(image_path, cv2.IMREAD_COLOR)
            images_per_class[class_label].append(img)
In [12]:
for key,value in images_per_class.items():
    print("{0} -> {1}".format(key, len(value)))
0 -> 0
Plot images

Plot images so we can see what the input looks like

In [20]:
#Function for getting images class wise
def plot_for_class(label):
    nb_rows = 3
    nb_cols = 3
    fig, axs = plt.subplots(nb_rows, nb_cols, figsize=(6, 6))

    n = 0
    for i in range(0, nb_rows):
        for j in range(0, nb_cols):
            axs[i, j].xaxis.set_ticklabels([])
            axs[i, j].yaxis.set_ticklabels([])
            axs[i, j].imshow(images_per_class[label][n])
            n += 1 
In [21]:
plot_for_class("0")
In [22]:
plot_for_class("1")
In [23]:
plot_for_class("6")
In [24]:
len(images_per_class['0'])
Out[24]:
80
  • Getting radom images from all classes
In [25]:
import random as rn
Z=images_per_class

fig,ax=plt.subplots(5,5)
fig.set_size_inches(15,15)
for i in range(5):
    for j in range (5):
        l=rn.randint(0,len(Z)-1)
        #print(l)
        k=rn.randint(0,len(Z[str(l)])-1)
        #print(k)
        ax[i,j].imshow(Z[str(l)][k])
        ax[i,j].set_title('Flower: '+str(l))
        
plt.tight_layout()
In [ ]:
 
  • From above groups of images we can observe there is huge variation in data set
  • There is a wide mix of similar feature among classes as well, like class 9,11,12 look a lot similar
  • If we notice Flowe class 2 in above image, flowers are very less compared to the bigger leaves it contains
  • Many of the images are center weighted and are good for details, but many of them has huge variations or very less presence of the image compared to overall image.
  • Color distribution is very dynamic among classes.
In [ ]:
 

Apply different filters

In [30]:
image = images_per_class["1"][8]
plt.imshow(image, cmap='gray')
# 3x3 sobel filter for horizontal edge detection
sobel_y = np.array([[ -1, -2, -1], 
                   [ 0, 0, 0], 
                   [ 1, 2, 1]])
# vertical edge detection
sobel_x = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])
# filter the image using filter2D(grayscale image, bit-depth, kernel)  
filtered_image1 = cv2.filter2D(image, -1, sobel_x)
filtered_image2 = cv2.filter2D(image, -1, sobel_y)
f, ax = plt.subplots(1, 2, figsize=(15, 4))
ax[0].set_title('horizontal edge detection')
ax[0].imshow(filtered_image1, cmap='gray')
ax[1].set_title('vertical edge detection')
ax[1].imshow(filtered_image2, cmap='gray')
Out[30]:
<matplotlib.image.AxesImage at 0x18e83afb0c8>
In [31]:
image = images_per_class["0"][8]
plt.imshow(image, cmap='gray')
# 3x3 sobel filter for horizontal edge detection
sobel_y = np.array([[ -1, -2, -1], 
                   [ 0, 0, 0], 
                   [ 1, 2, 1]])
# vertical edge detection
sobel_x = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])
# filter the image using filter2D(grayscale image, bit-depth, kernel)  
filtered_image1 = cv2.filter2D(image, -1, sobel_x)
filtered_image2 = cv2.filter2D(image, -1, sobel_y)
f, ax = plt.subplots(1, 2, figsize=(15, 4))
ax[0].set_title('horizontal edge detection')
ax[0].imshow(filtered_image1, cmap='gray')
ax[1].set_title('vertical edge detection')
ax[1].imshow(filtered_image2, cmap='gray')
Out[31]:
<matplotlib.image.AxesImage at 0x18eadad4548>
In [32]:
image = images_per_class["0"][8]

# 3x3 sobel filter for horizontal edge detection
sobel_y = np.array([[ 0, -1, 0], 
                   [ -1, 5, -1], 
                   [ 0, -1, 0]])
# vertical edge detection
sobel_x = np.array([[ 0, -1, 0], 
                   [ -1, 5, -1], 
                   [ 0, -1, 0]])
# filter the image using filter2D(grayscale image, bit-depth, kernel)  
filtered_image1 = cv2.filter2D(image, -1, sobel_x,sobel_y)
plt.imshow(filtered_image1)
Out[32]:
<matplotlib.image.AxesImage at 0x18e82b4e108>
In [26]:
def create_mask_for_plant(image):
    image_hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    sensitivity = 35
    lower_hsv = np.array([60 - sensitivity, 100, 50])
    upper_hsv = np.array([60 + sensitivity, 255, 255])

    mask = cv2.inRange(image_hsv, lower_hsv, upper_hsv)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11,11))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    
    return mask

def segment_plant(image):
    mask = create_mask_for_plant(image)
    output = cv2.bitwise_and(image, image, mask = mask)
    return output

def sharpen_image(image):
    image_blurred = cv2.GaussianBlur(image, (0, 0), 3)
    image_sharp = cv2.addWeighted(image, 1.5, image_blurred, -0.5, 0)
    return image_sharp
In [ ]:
 
In [27]:
image = images_per_class["9"][65]

image_mask = create_mask_for_plant(image)
image_segmented = segment_plant(image)
image_sharpen = sharpen_image(image_segmented)

fig, axs = plt.subplots(1, 4, figsize=(20, 20))
axs[0].imshow(image)
axs[1].imshow(image_mask)
axs[2].imshow(image_segmented)
axs[3].imshow(image_sharpen)
Out[27]:
<matplotlib.image.AxesImage at 0x18e812e6fc8>
In [28]:
image = images_per_class["0"][65]

image_mask = create_mask_for_plant(image)
image_segmented = segment_plant(image)
image_sharpen = sharpen_image(image_segmented)

fig, axs = plt.subplots(1, 4, figsize=(20, 20))
axs[0].imshow(image)
axs[1].imshow(image_mask)
axs[2].imshow(image_segmented)
axs[3].imshow(image_sharpen)
Out[28]:
<matplotlib.image.AxesImage at 0x18e819a5348>
In [29]:
image = images_per_class["1"][8]

image_mask = create_mask_for_plant(image)
image_segmented = segment_plant(image)
image_sharpen = sharpen_image(image_segmented)

fig, axs = plt.subplots(1, 4, figsize=(20, 20))
axs[0].imshow(image)
axs[1].imshow(image_mask)
axs[2].imshow(image_segmented)
axs[3].imshow(image_sharpen)
Out[29]:
<matplotlib.image.AxesImage at 0x18e839d8f88>
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 
  • Importing data
  • Preprocessing the image for using it in models
  • For Supervised Models like KNN,
    • importing and resizing it to 32x32 with RGB to maintain its colours and pattern (COLOR_BGR2RGB)
    • For Interpolation using cv.INTER_AREA
    • dividing the values with 255 to normalize it and make it float
    • capturing the folder names as categories
In [14]:
#we can not directly use the image, we have to process the image.

from pathlib import Path
from skimage.io import imread
from keras.preprocessing import image
import cv2 as cv
def load_image_files(container_path):
    image_dir = Path(container_path)
    folders = [directory for directory in image_dir.iterdir() if directory.is_dir()]
    categories = [fo.name for fo in folders]

    images = []
    flat_data = []
    target = []
    count = 0
    train_img = []
    label_img = []
    for i, direc in enumerate(folders):
        for file in direc.iterdir():
            count += 1
            img = imread(file)
            #img = cv.cvtColor(img, cv.COLOR_BGR2RGB)
            img_pred = cv.resize(img, (img_height, img_width), interpolation=cv.INTER_AREA)
            img_pred = image.img_to_array(img_pred)
            img_pred = img_pred / 255
            train_img.append(img_pred)
            label_img.append(categories[i])
            
    X = np.array(train_img)
    y = np.array(label_img)
    return X,y

#Using the Keras pre-processing library the image is converted to an array and then normalised.
In [15]:
X = []
y = []
X,y = load_image_files(specPath)
In [ ]:
 
Exploring shape of imported data
In [16]:
X.shape
Out[16]:
(1375, 100, 100, 3)
In [17]:
y.shape
Out[17]:
(1375,)

Exploring images captured

In [18]:
y[0]
Out[18]:
'0'
In [19]:
plt.imshow(X[0])
plt.show()
In [20]:
plt.imshow(X[1],cmap='gist_earth')
plt.show()
In [21]:
fig=plt.figure(figsize=(15,15))

for i in range(1,101):
  img=X[i]
  fig.add_subplot(10,10,i)
  plt.imshow(img,cmap='gray')

plt.show()
print('Label: ', y[1:101])
Label:  ['0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0'
 '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0'
 '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0'
 '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0' '0'
 '0' '0' '0' '0' '0' '0' '0' '1' '1' '1' '1' '1' '1' '1' '1' '1' '1' '1'
 '1' '1' '1' '1' '1' '1' '1' '1' '1' '1']
In [22]:
fig=plt.figure(figsize=(15,15))

for i in range(1,101):
  img=X[1000+i]
  fig.add_subplot(10,10,i)
  plt.imshow(img)

plt.show()
print('Label: ', y[1001:1101])
Label:  ['5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5'
 '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5'
 '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5' '5'
 '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6'
 '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6' '6'
 '6' '6' '6' '6' '6' '6' '6' '6' '6' '6']

Image analysis

  • Images are not simple
  • Contains foreground and background details with multiple objects as noise in it like trees, big leaves, bees etc
  • Huge variations of data present
  • Images are not focused as well
  • Actual flowers are covering very less pixels compared to background and noise. This will lead to high class imbalance for target object and noise.
  • Simple supervised models will have hard time filtering the actual plants with soil and stones as they see the whole picture as a single input and are not splitting foreground from background.

Creating data sets for training and testing

  • Splitting Whole data set to Train, Val and Test with 80%, 10%, 10% respectively
In [23]:
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.2)
In [24]:
X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, random_state=42, test_size=0.5)
In [25]:
#View data set shape
print("X_train: "+str(X_train.shape))
print("X_test: "+str(X_test.shape))
print("X_val: "+str(X_val.shape))
print("y_train: "+str(y_train.shape))
print("y_test: "+str(y_test.shape))
print("y_val: "+str(y_val.shape))
X_train: (1100, 100, 100, 3)
X_test: (138, 100, 100, 3)
X_val: (137, 100, 100, 3)
y_train: (1100,)
y_test: (138,)
y_val: (137,)
In [ ]:
 
In [ ]:
 
In [26]:
#View Raw data in train set
X_train[0]
Out[26]:
array([[[0.01568628, 0.15294118, 0.01960784],
        [0.03921569, 0.14117648, 0.03137255],
        [0.07843138, 0.14509805, 0.08235294],
        ...,
        [0.07843138, 0.21176471, 0.09803922],
        [0.06666667, 0.15686275, 0.06666667],
        [0.03529412, 0.1254902 , 0.03137255]],

       [[0.04313726, 0.15686275, 0.03137255],
        [0.07843138, 0.1764706 , 0.06666667],
        [0.09411765, 0.16470589, 0.09411765],
        ...,
        [0.06666667, 0.21568628, 0.10196079],
        [0.03529412, 0.14509805, 0.05490196],
        [0.09411765, 0.21176471, 0.11764706]],

       [[0.23529412, 0.36078432, 0.23137255],
        [0.21960784, 0.34901962, 0.22745098],
        [0.2       , 0.3254902 , 0.21960784],
        ...,
        [0.15294118, 0.3254902 , 0.21568628],
        [0.24313726, 0.38431373, 0.29411766],
        [0.2509804 , 0.4       , 0.30588236]],

       ...,

       [[0.1882353 , 0.19215687, 0.12941177],
        [0.26666668, 0.27058825, 0.20784314],
        [0.2627451 , 0.26666668, 0.21176471],
        ...,
        [0.39215687, 0.52156866, 0.39607844],
        [0.36078432, 0.49019608, 0.35686275],
        [0.2901961 , 0.4117647 , 0.27058825]],

       [[0.23921569, 0.24313726, 0.18039216],
        [0.25490198, 0.25882354, 0.19607843],
        [0.23529412, 0.23921569, 0.18431373],
        ...,
        [0.24705882, 0.39607844, 0.2509804 ],
        [0.24705882, 0.39607844, 0.23921569],
        [0.24705882, 0.4       , 0.22352941]],

       [[0.25882354, 0.2627451 , 0.2       ],
        [0.22352941, 0.22745098, 0.16470589],
        [0.23137255, 0.23529412, 0.18039216],
        ...,
        [0.03137255, 0.16470589, 0.02745098],
        [0.04705882, 0.20392157, 0.03529412],
        [0.08235294, 0.24313726, 0.06666667]]], dtype=float32)
In [27]:
#Reshaping Data sets for using in KNN model

from builtins import range
from builtins import object

num_training = X_train.shape[0]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]

num_test = X_test.shape[0]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]

num_val = X_val.shape[0]
mask = list(range(num_val))
X_val = X_val[mask]
y_val = y_val[mask]

# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))

print("X_train: "+str(X_train.shape))
print("X_test: "+str(X_test.shape))
print("X_val: "+str(X_val.shape))
print("y_train: "+str(y_train.shape))
print("y_test: "+str(y_test.shape))
print("y_val: "+str(y_val.shape))
X_train: (1100, 30000)
X_test: (138, 30000)
X_val: (137, 30000)
y_train: (1100,)
y_test: (138,)
y_val: (137,)
In [47]:
print(y.view())
['0' '0' '0' ... '9' '9' '9']
In [ ]:
 

Image classification with KNN

KNN
  • For Flower species raw data, expectations is the flower patern remain close to each other and a KNN model will be able pick up the common features and group it together
  • k-nearest neighbor algorithm is for classifying objects based on closest training examples in the feature space. k-nearest neighbor algorithm is among the simplest of all machine learning algorithms. Training process for this algorithm only consists of storing feature vectors and labels of the training images. In the classification process, the unlabelled query point is simply assigned to the label of its k nearest neighbors.
  • A main advantage of the KNN algorithm is that it performs well with multi-modal classes because the basis of its decision is based on a small neighborhood of similar objects. Therefore, even if the target class is multi-modal, the algorithm can still lead to good accuracy.
In [48]:
from sklearn.metrics import accuracy_score,confusion_matrix,recall_score,f1_score,precision_score,roc_curve,log_loss,auc
from sklearn.neighbors import KNeighborsClassifier


#KNN Model with 1 neighbour

KnnModel = KNeighborsClassifier(n_neighbors=1)
KnnModel.fit(X_train,y_train)
y_predict=KnnModel.predict(X_test)
In [49]:
print('Accuracy score:',accuracy_score(y_test,y_predict))
print('confuion matrix:\n',confusion_matrix(y_test,y_predict))
Accuracy score: 0.32608695652173914
confuion matrix:
 [[1 1 0 0 0 1 2 0 0 0 0 0 1 0 0 0 0]
 [1 0 0 0 0 3 0 0 0 1 0 0 0 0 0 3 0]
 [0 1 6 0 0 0 0 0 1 0 0 0 0 0 0 0 0]
 [1 0 0 3 0 3 1 0 0 0 0 0 0 0 0 0 0]
 [1 0 0 0 3 0 2 0 0 0 0 0 0 0 0 0 0]
 [1 1 0 0 0 5 0 0 1 1 0 0 0 1 2 0 0]
 [0 0 0 1 3 0 3 0 0 0 0 0 0 0 2 0 0]
 [0 0 2 0 0 0 0 6 0 0 0 0 2 0 1 0 0]
 [0 0 0 1 0 0 0 1 1 0 0 1 0 1 1 0 0]
 [0 0 0 0 0 1 2 0 0 0 0 0 0 0 1 0 0]
 [0 2 0 0 0 5 0 0 0 1 2 0 0 0 2 0 0]
 [0 2 1 0 0 2 0 0 0 1 0 2 1 0 0 5 0]
 [0 0 0 0 0 1 1 0 0 0 0 0 4 0 0 0 0]
 [0 1 0 0 2 1 0 0 0 0 0 0 0 0 3 1 0]
 [0 0 0 0 3 0 1 0 0 1 0 0 0 0 3 0 0]
 [0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0]
 [0 0 0 0 1 1 1 0 0 0 0 0 1 0 0 0 5]]
  • With single neighbour we are able to acheive close to 30% accuracy, but 1 neighbour is higly volatile and wont give us generalised result
In [50]:
# Initializing the value of k and finding the accuracies on validation data
k_vals = range(1, 30, 2)
accuracies = []

for k in range(1, 30, 2):
  knn = KNeighborsClassifier(n_neighbors=k)
  knn.fit(X_train, y_train)
  score = knn.score(X_val, y_val)
  print("k value=%d, accuracy score=%.2f%%" % (k, score * 100))
  accuracies.append(score)
 
# finding the value of k which has the largest accuracy
i = int(np.argmax(accuracies))
print("k=%d value has highest accuracy of %.2f%% on validation data" % (k_vals[i],accuracies[i] * 100))
k value=1, accuracy score=35.04%
k value=3, accuracy score=29.93%
k value=5, accuracy score=29.93%
k value=7, accuracy score=32.12%
k value=9, accuracy score=30.66%
k value=11, accuracy score=28.47%
k value=13, accuracy score=27.74%
k value=15, accuracy score=29.20%
k value=17, accuracy score=29.93%
k value=19, accuracy score=28.47%
k value=21, accuracy score=28.47%
k value=23, accuracy score=28.47%
k value=25, accuracy score=25.55%
k value=27, accuracy score=27.01%
k value=29, accuracy score=26.28%
k=1 value has highest accuracy of 35.04% on validation data
  • Even though we got highest accuracies at 1 neighbour but we will go ahead with k=7 for more generalized approach which showed similar high accuracies on validation data set.
In [51]:
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train, y_train)
predictions = knn.predict(X_test)
In [52]:
print("EVALUATION ON TESTING DATA")
print(confusion_matrix(y_test,predictions))
print(knn.score(X_test, y_test))
EVALUATION ON TESTING DATA
[[2 0 0 0 0 1 0 0 0 1 0 0 0 0 2 0 0]
 [0 0 0 0 0 4 0 0 0 0 2 0 0 0 1 1 0]
 [0 0 5 0 0 0 0 0 2 1 0 0 0 0 0 0 0]
 [1 0 0 0 1 5 0 0 0 0 0 0 1 0 0 0 0]
 [2 0 0 0 3 0 1 0 0 0 0 0 0 0 0 0 0]
 [2 0 0 1 0 5 0 0 0 2 1 0 0 0 1 0 0]
 [1 0 0 1 5 1 1 0 0 0 0 0 0 0 0 0 0]
 [0 0 5 0 0 1 0 1 0 1 0 0 3 0 0 0 0]
 [0 0 0 0 1 0 0 0 1 1 1 0 0 1 1 0 0]
 [0 1 0 0 0 1 1 0 0 1 0 0 0 0 0 0 0]
 [0 0 0 0 0 5 1 0 0 3 2 0 0 0 1 0 0]
 [0 1 1 0 0 3 0 1 0 2 3 0 0 0 0 3 0]
 [0 0 0 0 1 1 0 0 0 0 0 0 4 0 0 0 0]
 [0 1 0 0 2 1 0 0 0 0 0 0 0 0 4 0 0]
 [0 0 0 0 3 1 2 0 0 1 0 0 0 0 1 0 0]
 [0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0]
 [0 0 0 0 0 2 1 0 0 0 0 0 1 0 0 0 5]]
0.2318840579710145
In [53]:
plt.figure(figsize=(2,2))
plt.imshow(X_test[59].reshape(img_height,img_width,3))
plt.show()
image = X_test[59]
print('Prediction:',knn.predict(image.reshape(1, -1)))
print('Actual:',y_test[59])
Prediction: ['9']
Actual: 9
In [54]:
plt.figure(figsize=(2,2))
plt.imshow(X_test[30].reshape(img_height,img_width,3))
plt.show()
image = X_test[30]
print('Prediction:',knn.predict(image.reshape(1, -1)))
print('Actual:',y_test[30])
Prediction: ['16']
Actual: 10
In [55]:
predictions = knn.predict(X_test)
print(classification_report(y_test, predictions))
              precision    recall  f1-score   support

           0       0.25      0.33      0.29         6
           1       0.00      0.00      0.00         8
          10       0.45      0.62      0.53         8
          11       0.00      0.00      0.00         8
          12       0.19      0.50      0.27         6
          13       0.16      0.42      0.23        12
          14       0.14      0.11      0.12         9
          15       0.50      0.09      0.15        11
          16       0.33      0.17      0.22         6
           2       0.08      0.25      0.12         4
           3       0.22      0.17      0.19        12
           4       0.00      0.00      0.00        14
           5       0.44      0.67      0.53         6
           6       0.00      0.00      0.00         8
           7       0.09      0.12      0.11         8
           8       0.20      0.33      0.25         3
           9       1.00      0.56      0.71         9

    accuracy                           0.23       138
   macro avg       0.24      0.26      0.22       138
weighted avg       0.24      0.23      0.21       138

In [56]:
print(knn.score(X_test, y_test))
0.2318840579710145
  • Accuracies from KNN are close to 23%.
  • We can observe on each classes precision and recall are very low.
  • Model is not able to identify and split relevent data from rest of the noise.
  • For flower species, expectations is the flower patern remain close to each other and a KNN model will be able pick up the common features and group it together.
  • While data analysis we noticed the images contains lot of noise or in few images actual flower is hardly cvering 5% of total pixels
  • Executed KNN from 1 to 30 neighbours and identified best values were at 1 neighbour and 25 neighbours. We chose for 7 neighbours so that we will get a more generalized prediction on classification.
  • With KNN our classification accuracies were close to 23% which are way below acceptable levels
  • Images with different angles are also getting mixed up with similar flower class creating a difference in prediction for KNN
  • We can observe there is an underlying pattern to the images for both raw pixel intensities and color. KNN is not capable enough to understand the differences and classifymore accurately.
In [ ]:
 
Issues with KNN
  • KNN depends on nearest neighbours, which might not be the best choice all the time. Observed the same issue during our evaluation proces as well
  • a major disadvantage of the KNN algorithm is that it uses all the features equally in computing for similarities. This can lead to classification errors, especially when there is only a small subset of features that are useful for classification.
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 

Image classification with Neural Network

In [57]:
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.2)
In [58]:
X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, random_state=42, test_size=0.5)
In [ ]:
 
  • Using One hot encoder to convert the categories to array formats
In [59]:
#from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import OneHotEncoder


one_hot_encoder = OneHotEncoder(sparse=False)
one_hot_encoder.fit(y_train.reshape(-1, 1))

y_train = one_hot_encoder.transform(y_train.reshape(-1, 1))
#y_train = pd.DataFrame(data=y_train, columns=one_hot_encoder.categories_)

y_test = one_hot_encoder.transform(y_test.reshape(-1, 1))
#y_test = pd.DataFrame(data=y_test, columns=one_hot_encoder.categories_)

y_val = one_hot_encoder.transform(y_val.reshape(-1, 1))
#y_val = pd.DataFrame(data=y_val, columns=one_hot_encoder.categories_)


print("Shape of y_train:", y_train.shape)

print("Shape of y_test:", y_test.shape)

print("Shape of y_val:", y_val.shape)
Shape of y_train: (1100, 17)
Shape of y_test: (138, 17)
Shape of y_val: (137, 17)
In [60]:
y_train
Out[60]:
array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 1., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]])
In [61]:
y_test_cat = pd.DataFrame(data=y_test, columns=one_hot_encoder.categories_)
In [62]:
y_test_cat
Out[62]:
0 1 10 11 12 13 14 15 16 2 3 4 5 6 7 8 9
0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
4 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
133 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
134 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
135 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0
136 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
137 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

138 rows × 17 columns

In [63]:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Activation
from tensorflow.keras import optimizers
from tensorflow.keras.layers import BatchNormalization, Dropout
In [ ]:
 
In [ ]:
 
  • created NN with less complexity as the results remained close and to reduce execution time as well
In [65]:
model = Sequential()
model.add(Flatten())
model.add(Dense(1000,kernel_initializer='he_normal'))
model.add(BatchNormalization())                    
model.add(Activation('relu')) 
model.add(Dense(500,kernel_initializer='he_normal'))
model.add(BatchNormalization())                    
model.add(Activation('relu'))  
model.add(Dense(250,kernel_initializer='he_normal'))
model.add(BatchNormalization())                    
model.add(Activation('relu'))
model.add(Dense(125,kernel_initializer='he_normal'))
model.add(BatchNormalization())                    
model.add(Activation('relu'))
model.add(Dense(34,kernel_initializer='he_normal'))
model.add(BatchNormalization())                    
model.add(Activation('relu'))
model.add(Dense(17,kernel_initializer='he_normal'))
model.add(Activation('softmax'))

#updating learning rate
adam = optimizers.Adam(lr=0.009, decay=1e-6)
# Compile the model
model.compile(loss="categorical_crossentropy", metrics=["accuracy"], optimizer="adam")

# Fit the model
history=model.fit(x=X_train, y=y_train, batch_size=batch_size, epochs=nb_epochs, validation_data=(X_val, y_val))
WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1635: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
Train on 1100 samples, validate on 137 samples
Epoch 1/25
1100/1100 [==============================] - 6s 5ms/sample - loss: 2.6300 - acc: 0.1845 - val_loss: 13.5710 - val_acc: 0.0657
Epoch 2/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 2.0542 - acc: 0.3700 - val_loss: 6.6736 - val_acc: 0.1022
Epoch 3/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 1.7562 - acc: 0.4936 - val_loss: 2.8070 - val_acc: 0.1971
Epoch 4/25
1100/1100 [==============================] - 7s 6ms/sample - loss: 1.4191 - acc: 0.5873 - val_loss: 2.4475 - val_acc: 0.1898
Epoch 5/25
1100/1100 [==============================] - 6s 5ms/sample - loss: 1.1651 - acc: 0.6900 - val_loss: 2.1098 - val_acc: 0.3431
Epoch 6/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.8880 - acc: 0.7873 - val_loss: 1.9841 - val_acc: 0.3723
Epoch 7/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.6742 - acc: 0.8491 - val_loss: 2.0046 - val_acc: 0.3869
Epoch 8/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.5015 - acc: 0.9082 - val_loss: 1.8879 - val_acc: 0.4380
Epoch 9/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.3765 - acc: 0.9382 - val_loss: 2.0605 - val_acc: 0.3723
Epoch 10/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.3040 - acc: 0.9527 - val_loss: 1.9703 - val_acc: 0.4088
Epoch 11/25
1100/1100 [==============================] - 6s 5ms/sample - loss: 0.2888 - acc: 0.9500 - val_loss: 1.9503 - val_acc: 0.4015
Epoch 12/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.2166 - acc: 0.9618 - val_loss: 2.2544 - val_acc: 0.3723
Epoch 13/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.2220 - acc: 0.9609 - val_loss: 2.1489 - val_acc: 0.3796
Epoch 14/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.1677 - acc: 0.9727 - val_loss: 2.2966 - val_acc: 0.3650
Epoch 15/25
1100/1100 [==============================] - 6s 5ms/sample - loss: 0.1909 - acc: 0.9627 - val_loss: 2.5031 - val_acc: 0.3358
Epoch 16/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.1202 - acc: 0.9773 - val_loss: 2.0976 - val_acc: 0.3869
Epoch 17/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.1043 - acc: 0.9855 - val_loss: 2.0540 - val_acc: 0.4599
Epoch 18/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.1069 - acc: 0.9782 - val_loss: 2.5402 - val_acc: 0.3431
Epoch 19/25
1100/1100 [==============================] - 6s 5ms/sample - loss: 0.0865 - acc: 0.9891 - val_loss: 1.9479 - val_acc: 0.4818
Epoch 20/25
1100/1100 [==============================] - 6s 5ms/sample - loss: 0.0766 - acc: 0.9891 - val_loss: 2.2686 - val_acc: 0.3796
Epoch 21/25
1100/1100 [==============================] - 6s 5ms/sample - loss: 0.0491 - acc: 0.9982 - val_loss: 2.1081 - val_acc: 0.4307
Epoch 22/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.0542 - acc: 0.9945 - val_loss: 2.4376 - val_acc: 0.3942
Epoch 23/25
1100/1100 [==============================] - 6s 5ms/sample - loss: 0.0724 - acc: 0.9827 - val_loss: 3.0650 - val_acc: 0.3504
Epoch 24/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.0846 - acc: 0.9782 - val_loss: 2.2603 - val_acc: 0.3869
Epoch 25/25
1100/1100 [==============================] - 5s 5ms/sample - loss: 0.1215 - acc: 0.9755 - val_loss: 2.7664 - val_acc: 0.3504
In [66]:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()


plt.show()
<Figure size 432x288 with 0 Axes>
  • We can notice the train accuracies are increasing close to 100% but validation accuracies are around 40% only
  • We will try to augment the train data and train it again so that we will get less overfitted model
In [67]:
results = model.evaluate(X_test, y_test)
print('Accuracy: %f ' % (results[1]*100))
print('Loss: %f' % results[0])



Y_pred_test_cls = (model.predict(X_test) > 0.5).astype("int32")

plt.figure(figsize=(2,2))
plt.imshow(X_test[10].reshape(img_height,img_width,3))
plt.show()

print('Label - one hot encoded: \n',y_test_cat.iloc[10] )
print('Actual Label - one hot encoded:  ', y_test[10])
print('Predicted Label - one hot encoded: ',Y_pred_test_cls[10] )
138/138 [==============================] - 0s 1ms/sample - loss: 3.1033 - acc: 0.2971
Accuracy: 29.710144 
Loss: 3.103287
Label - one hot encoded: 
 0     0.0
1     0.0
10    0.0
11    0.0
12    1.0
13    0.0
14    0.0
15    0.0
16    0.0
2     0.0
3     0.0
4     0.0
5     0.0
6     0.0
7     0.0
8     0.0
9     0.0
Name: 10, dtype: float64
Actual Label - one hot encoded:   [0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Predicted Label - one hot encoded:  [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]
  • Test data set accuracies are around 40%
In [ ]:
 
  • With Neural network we were able to improve test set predictions to close to 40%.
  • Even with NN we are struggling for image identification with high accuracies.
  • Epochs were limited at 25 only as we will be comparing the same with CNN on similar grounds. We may acheive a bit higher accuracies with more epochs.
  • NN is able to provide better result compared to KNN.
In [ ]:
 
In [ ]:
 
In [ ]:
 

Image Classification with CNN

In [28]:
from tensorflow.keras.models import Sequential  # initial NN
from tensorflow.keras.layers import Dense, Dropout # construct each layer
from tensorflow.keras.layers import Conv2D # swipe across the image by 1
from tensorflow.keras.layers import MaxPooling2D # swipe across by pool size
from tensorflow.keras.layers import Flatten, GlobalAveragePooling2D
from tensorflow import keras
In [29]:
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.2)

X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, random_state=42, test_size=0.5)
In [30]:
#from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import OneHotEncoder


one_hot_encoder = OneHotEncoder(sparse=False)
one_hot_encoder.fit(y_train.reshape(-1, 1))

y_train = one_hot_encoder.transform(y_train.reshape(-1, 1))
#y_train = pd.DataFrame(data=y_train, columns=one_hot_encoder.categories_)

y_test = one_hot_encoder.transform(y_test.reshape(-1, 1))
#y_test = pd.DataFrame(data=y_test, columns=one_hot_encoder.categories_)

y_val = one_hot_encoder.transform(y_val.reshape(-1, 1))
#y_val = pd.DataFrame(data=y_val, columns=one_hot_encoder.categories_)


print("Shape of y_train:", y_train.shape)

print("Shape of y_test:", y_test.shape)

print("Shape of y_val:", y_val.shape)
Shape of y_train: (1100, 17)
Shape of y_test: (138, 17)
Shape of y_val: (137, 17)
In [31]:
y_train
Out[31]:
array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 1., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]])
In [32]:
y_test_cat = pd.DataFrame(data=y_test, columns=one_hot_encoder.categories_)
In [33]:
y_test_cat
Out[33]:
0 1 10 11 12 13 14 15 16 2 3 4 5 6 7 8 9
0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
4 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
133 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
134 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
135 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0
136 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
137 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

138 rows × 17 columns

In [ ]:
 
In [74]:
model = Sequential()
model.add(Conv2D(64, (5, 5), activation='relu', input_shape=(img_height, img_width, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())



model.add(Dense(17, activation='softmax'))

model.summary()

# compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])


history=model.fit(x=X_train, y=y_train, batch_size=batch_size, epochs=nb_epochs, validation_data=(X_val, y_val))
history
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 96, 96, 64)        4864      
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 48, 48, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 147456)            0         
_________________________________________________________________
dense_6 (Dense)              (None, 17)                2506769   
=================================================================
Total params: 2,511,633
Trainable params: 2,511,633
Non-trainable params: 0
_________________________________________________________________
Train on 1100 samples, validate on 137 samples
Epoch 1/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 4.1721 - acc: 0.1964 - val_loss: 2.1570 - val_acc: 0.2555
Epoch 2/25
1100/1100 [==============================] - 13s 12ms/sample - loss: 1.6503 - acc: 0.4745 - val_loss: 1.8851 - val_acc: 0.4088
Epoch 3/25
1100/1100 [==============================] - 12s 11ms/sample - loss: 1.0600 - acc: 0.6845 - val_loss: 1.6124 - val_acc: 0.4380
Epoch 4/25
1100/1100 [==============================] - 10s 10ms/sample - loss: 0.6163 - acc: 0.8345 - val_loss: 1.6192 - val_acc: 0.4453
Epoch 5/25
1100/1100 [==============================] - 10s 9ms/sample - loss: 0.2798 - acc: 0.9509 - val_loss: 1.5643 - val_acc: 0.5255
Epoch 6/25
1100/1100 [==============================] - 10s 9ms/sample - loss: 0.1414 - acc: 0.9827 - val_loss: 1.6645 - val_acc: 0.5036
Epoch 7/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0766 - acc: 0.9955 - val_loss: 1.6469 - val_acc: 0.5693
Epoch 8/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0308 - acc: 1.0000 - val_loss: 1.5803 - val_acc: 0.5109
Epoch 9/25
1100/1100 [==============================] - 10s 9ms/sample - loss: 0.0172 - acc: 1.0000 - val_loss: 1.6074 - val_acc: 0.5693
Epoch 10/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0123 - acc: 1.0000 - val_loss: 1.6519 - val_acc: 0.5620
Epoch 11/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0090 - acc: 1.0000 - val_loss: 1.6861 - val_acc: 0.5401
Epoch 12/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0072 - acc: 1.0000 - val_loss: 1.7331 - val_acc: 0.5109
Epoch 13/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0060 - acc: 1.0000 - val_loss: 1.7155 - val_acc: 0.5547
Epoch 14/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0050 - acc: 1.0000 - val_loss: 1.7522 - val_acc: 0.5474
Epoch 15/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0043 - acc: 1.0000 - val_loss: 1.8197 - val_acc: 0.5328
Epoch 16/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0037 - acc: 1.0000 - val_loss: 1.7535 - val_acc: 0.5328
Epoch 17/25
1100/1100 [==============================] - 12s 11ms/sample - loss: 0.0032 - acc: 1.0000 - val_loss: 1.7874 - val_acc: 0.5328
Epoch 18/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0029 - acc: 1.0000 - val_loss: 1.8045 - val_acc: 0.5401
Epoch 19/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0026 - acc: 1.0000 - val_loss: 1.8450 - val_acc: 0.5182
Epoch 20/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0023 - acc: 1.0000 - val_loss: 1.8242 - val_acc: 0.5547
Epoch 21/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0021 - acc: 1.0000 - val_loss: 1.8392 - val_acc: 0.5328
Epoch 22/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0019 - acc: 1.0000 - val_loss: 1.8550 - val_acc: 0.5401
Epoch 23/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0017 - acc: 1.0000 - val_loss: 1.8783 - val_acc: 0.5328
Epoch 24/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0016 - acc: 1.0000 - val_loss: 1.8903 - val_acc: 0.5401
Epoch 25/25
1100/1100 [==============================] - 11s 10ms/sample - loss: 0.0014 - acc: 1.0000 - val_loss: 1.8824 - val_acc: 0.5474
Out[74]:
<tensorflow.python.keras.callbacks.History at 0x18e9d2dd188>
In [75]:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()


plt.show()
<Figure size 432x288 with 0 Axes>
  • With a very basic CNN model also we can see the validation accuracies are close to 52%
In [76]:
results = model.evaluate(X_test, y_test)
print('Accuracy: %f ' % (results[1]*100))
print('Loss: %f' % results[0])



Y_pred_test_cls = (model.predict(X_test) > 0.5).astype("int32")

plt.figure(figsize=(2,2))
plt.imshow(X_test[30].reshape(img_height,img_width,3))
plt.show()

print('Label - one hot encoded: \n',y_test_cat.iloc[30] )
print('Actual Label - one hot encoded:  ', y_test[30])
print('Predicted Label - one hot encoded: ',Y_pred_test_cls[30] )
138/138 [==============================] - 0s 3ms/sample - loss: 1.9848 - acc: 0.5435
Accuracy: 54.347825 
Loss: 1.984786
Label - one hot encoded: 
 0     0.0
1     0.0
10    1.0
11    0.0
12    0.0
13    0.0
14    0.0
15    0.0
16    0.0
2     0.0
3     0.0
4     0.0
5     0.0
6     0.0
7     0.0
8     0.0
9     0.0
Name: 30, dtype: float64
Actual Label - one hot encoded:   [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Predicted Label - one hot encoded:  [0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]
  • Adding few additional layers in model for it to capture more features
In [78]:
model = Sequential()
model.add(Conv2D(100, (5, 5), activation='relu', input_shape=(img_height, img_width, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(BatchNormalization()) 

model.add(Conv2D(filters=128, kernel_size=4, padding='same', activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(BatchNormalization()) 

model.add(Conv2D(filters=128, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.4))

model.add(Conv2D(filters=256, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))

model.add(Flatten())
model.add(Dense(17, activation='softmax'))

model.summary()

#updating learning rate
adam = optimizers.Adam(lr=0.009, decay=1e-6)

# compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

#Saving the best model using model checkpoint callback
model_checkpoint=keras.callbacks.ModelCheckpoint('Flowerspecies_CNN_model.h5', #where to save the model
                                                    save_best_only=True, 
                                                    monitor='val_accuracy', 
                                                    mode='max', 
                                                    verbose=1)

history=model.fit(x=X_train, y=y_train, 
                  batch_size=batch_size, 
                  epochs=nb_epochs, 
                  validation_data=(X_val, y_val))
                  #callbacks = [model_checkpoint])
history
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_5 (Conv2D)            (None, 96, 96, 100)       7600      
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 48, 48, 100)       0         
_________________________________________________________________
batch_normalization_7 (Batch (None, 48, 48, 100)       400       
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 48, 48, 128)       204928    
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 24, 24, 128)       0         
_________________________________________________________________
batch_normalization_8 (Batch (None, 24, 24, 128)       512       
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 24, 24, 128)       147584    
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 12, 12, 128)       0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 12, 12, 128)       0         
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 12, 12, 256)       295168    
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 6, 6, 256)         0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 6, 6, 256)         0         
_________________________________________________________________
flatten_3 (Flatten)          (None, 9216)              0         
_________________________________________________________________
dense_8 (Dense)              (None, 17)                156689    
=================================================================
Total params: 812,881
Trainable params: 812,425
Non-trainable params: 456
_________________________________________________________________
Train on 1100 samples, validate on 137 samples
Epoch 1/25
1100/1100 [==============================] - 67s 61ms/sample - loss: 2.9771 - acc: 0.1782 - val_loss: 2.6675 - val_acc: 0.1022
Epoch 2/25
1100/1100 [==============================] - 69s 63ms/sample - loss: 1.8149 - acc: 0.3845 - val_loss: 2.7551 - val_acc: 0.1533
Epoch 3/25
1100/1100 [==============================] - 70s 64ms/sample - loss: 1.3932 - acc: 0.5318 - val_loss: 3.1896 - val_acc: 0.1095
Epoch 4/25
1100/1100 [==============================] - 69s 63ms/sample - loss: 1.2257 - acc: 0.5891 - val_loss: 4.7905 - val_acc: 0.0584
Epoch 5/25
1100/1100 [==============================] - 70s 63ms/sample - loss: 1.0153 - acc: 0.6645 - val_loss: 4.9916 - val_acc: 0.0511
Epoch 6/25
1100/1100 [==============================] - 70s 64ms/sample - loss: 0.9172 - acc: 0.6891 - val_loss: 3.5504 - val_acc: 0.0949
Epoch 7/25
1100/1100 [==============================] - 69s 62ms/sample - loss: 0.7467 - acc: 0.7555 - val_loss: 3.3746 - val_acc: 0.1606
Epoch 8/25
1100/1100 [==============================] - 69s 63ms/sample - loss: 0.6372 - acc: 0.7673 - val_loss: 2.3839 - val_acc: 0.2993
Epoch 9/25
1100/1100 [==============================] - 71s 64ms/sample - loss: 0.5673 - acc: 0.8127 - val_loss: 2.5149 - val_acc: 0.3358
Epoch 10/25
1100/1100 [==============================] - 71s 64ms/sample - loss: 0.4393 - acc: 0.8491 - val_loss: 2.2613 - val_acc: 0.4526
Epoch 11/25
1100/1100 [==============================] - 70s 64ms/sample - loss: 0.3534 - acc: 0.8836 - val_loss: 1.9360 - val_acc: 0.5328
Epoch 12/25
1100/1100 [==============================] - 70s 64ms/sample - loss: 0.4358 - acc: 0.8518 - val_loss: 1.7737 - val_acc: 0.5693
Epoch 13/25
1100/1100 [==============================] - 71s 64ms/sample - loss: 0.3383 - acc: 0.8709 - val_loss: 1.7180 - val_acc: 0.6131
Epoch 14/25
1100/1100 [==============================] - 70s 63ms/sample - loss: 0.3424 - acc: 0.9018 - val_loss: 1.9128 - val_acc: 0.6350
Epoch 15/25
1100/1100 [==============================] - 69s 63ms/sample - loss: 0.2249 - acc: 0.9227 - val_loss: 1.9864 - val_acc: 0.5912
Epoch 16/25
1100/1100 [==============================] - 73s 66ms/sample - loss: 0.2182 - acc: 0.9209 - val_loss: 2.5885 - val_acc: 0.5474
Epoch 17/25
1100/1100 [==============================] - 71s 64ms/sample - loss: 0.1931 - acc: 0.9355 - val_loss: 2.8657 - val_acc: 0.4891
Epoch 18/25
1100/1100 [==============================] - 71s 65ms/sample - loss: 0.2096 - acc: 0.9282 - val_loss: 2.8506 - val_acc: 0.5328
Epoch 19/25
1100/1100 [==============================] - 79s 72ms/sample - loss: 0.2660 - acc: 0.9136 - val_loss: 3.3605 - val_acc: 0.5912
Epoch 20/25
1100/1100 [==============================] - 70s 64ms/sample - loss: 0.2280 - acc: 0.9245 - val_loss: 3.1978 - val_acc: 0.5766
Epoch 21/25
1100/1100 [==============================] - 69s 63ms/sample - loss: 0.1852 - acc: 0.9445 - val_loss: 3.3468 - val_acc: 0.5328
Epoch 22/25
1100/1100 [==============================] - 70s 63ms/sample - loss: 0.2223 - acc: 0.9373 - val_loss: 3.2091 - val_acc: 0.5620
Epoch 23/25
1100/1100 [==============================] - 70s 64ms/sample - loss: 0.1827 - acc: 0.9318 - val_loss: 2.4389 - val_acc: 0.6350
Epoch 24/25
1100/1100 [==============================] - 70s 64ms/sample - loss: 0.2120 - acc: 0.9382 - val_loss: 2.8978 - val_acc: 0.5547
Epoch 25/25
1100/1100 [==============================] - 70s 64ms/sample - loss: 0.1790 - acc: 0.9445 - val_loss: 3.0658 - val_acc: 0.5474
Out[78]:
<tensorflow.python.keras.callbacks.History at 0x18edcaa7648>
In [79]:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()


plt.show()
<Figure size 432x288 with 0 Axes>
  • We can observe now the validation accuracies are increased closed to 60%
  • still there is a noticable gap betweeen train and validation accuracies, will try to close the gap in further models
In [80]:
results = model.evaluate(X_test, y_test)
print('Accuracy: %f ' % (results[1]*100))
print('Loss: %f' % results[0])
138/138 [==============================] - 2s 12ms/sample - loss: 3.1156 - acc: 0.5725
Accuracy: 57.246375 
Loss: 3.115600
- Test data set accuracies are immproved closed to 60%
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 

Always save the model and its weights after training

In [81]:
model.save('./Flower_Species_Classifier_CNN.h5')

model.save_weights('./Flower_Species_Classifier_weights_CNN.h5')
  • Usinng Image Generator to increase and improve the data set to add varations
In [37]:
datagen= keras.preprocessing.image.ImageDataGenerator(rotation_range=30,
                                                      width_shift_range=0.3,
                                                      height_shift_range=0.3,
                                                      zoom_range=[0.4,1.5],
                                                      horizontal_flip=True,
                                                      vertical_flip=True)

datagen.fit(X_train)
In [87]:
model = Sequential()
model.add(Conv2D(100, (5, 5), activation='relu', input_shape=(img_height, img_width, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(BatchNormalization()) 

model.add(Conv2D(filters=128, kernel_size=4, padding='same', activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(BatchNormalization()) 

model.add(Conv2D(filters=128, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.4))

model.add(Conv2D(filters=256, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.4))

model.add(Flatten())
model.add(Dense(17, activation='softmax'))

model.summary()

#updating learning rate
adam = optimizers.Adam(lr=0.001, decay=1e-6)

# compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

#Saving the best model using model checkpoint callback
model_checkpoint=keras.callbacks.ModelCheckpoint('Flowerspecies_CNN_model.h5', #where to save the model
                                                    save_best_only=True, 
                                                    monitor='val_accuracy', 
                                                    mode='max', 
                                                    verbose=1)


history= model.fit_generator(datagen.flow(X_train, y_train, batch_size=32),  
                  epochs=nb_epochs, 
                  validation_data=(X_val, y_val))
                  #callbacks = [model_checkpoint])
history
Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_13 (Conv2D)           (None, 96, 96, 100)       7600      
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 48, 48, 100)       0         
_________________________________________________________________
batch_normalization_11 (Batc (None, 48, 48, 100)       400       
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 48, 48, 128)       204928    
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 24, 24, 128)       0         
_________________________________________________________________
batch_normalization_12 (Batc (None, 24, 24, 128)       512       
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 24, 24, 128)       147584    
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 12, 12, 128)       0         
_________________________________________________________________
dropout_6 (Dropout)          (None, 12, 12, 128)       0         
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 12, 12, 256)       295168    
_________________________________________________________________
max_pooling2d_16 (MaxPooling (None, 6, 6, 256)         0         
_________________________________________________________________
dropout_7 (Dropout)          (None, 6, 6, 256)         0         
_________________________________________________________________
flatten_5 (Flatten)          (None, 9216)              0         
_________________________________________________________________
dense_10 (Dense)             (None, 17)                156689    
=================================================================
Total params: 812,881
Trainable params: 812,425
Non-trainable params: 456
_________________________________________________________________
Epoch 1/25
34/35 [============================>.] - ETA: 1s - loss: 3.1028 - acc: 0.1180Epoch 1/25
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 2.7699 - acc: 0.0657
35/35 [==============================] - 69s 2s/step - loss: 3.0812 - acc: 0.1218 - val_loss: 2.7657 - val_acc: 0.0657
Epoch 2/25
34/35 [============================>.] - ETA: 1s - loss: 2.3598 - acc: 0.2125Epoch 1/25
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 2.6778 - acc: 0.1533
35/35 [==============================] - 71s 2s/step - loss: 2.3572 - acc: 0.2164 - val_loss: 2.6907 - val_acc: 0.1533
Epoch 3/25
34/35 [============================>.] - ETA: 1s - loss: 2.1958 - acc: 0.2594Epoch 1/25
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 3.1371 - acc: 0.0876
35/35 [==============================] - 70s 2s/step - loss: 2.1937 - acc: 0.2582 - val_loss: 3.3109 - val_acc: 0.0876
Epoch 4/25
34/35 [============================>.] - ETA: 1s - loss: 2.1218 - acc: 0.2987Epoch 1/25
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 2.9751 - acc: 0.1022
35/35 [==============================] - 71s 2s/step - loss: 2.1180 - acc: 0.3009 - val_loss: 3.1244 - val_acc: 0.1022
Epoch 5/25
34/35 [============================>.] - ETA: 1s - loss: 1.9551 - acc: 0.3530Epoch 1/25
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 3.1275 - acc: 0.0584
35/35 [==============================] - 72s 2s/step - loss: 1.9560 - acc: 0.3518 - val_loss: 3.2166 - val_acc: 0.0584
Epoch 6/25
34/35 [============================>.] - ETA: 1s - loss: 1.9546 - acc: 0.3464Epoch 1/25
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 2.9414 - acc: 0.0730
35/35 [==============================] - 72s 2s/step - loss: 1.9545 - acc: 0.3418 - val_loss: 3.0870 - val_acc: 0.0730
Epoch 7/25
34/35 [============================>.] - ETA: 1s - loss: 1.9117 - acc: 0.3343Epoch 1/25
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 2.4456 - acc: 0.1679
35/35 [==============================] - 71s 2s/step - loss: 1.9095 - acc: 0.3355 - val_loss: 2.4255 - val_acc: 0.1679
Epoch 8/25
34/35 [============================>.] - ETA: 2s - loss: 1.9581 - acc: 0.3549Epoch 1/25
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 2.4781 - acc: 0.1606
35/35 [==============================] - 73s 2s/step - loss: 1.9700 - acc: 0.3518 - val_loss: 2.4812 - val_acc: 0.1606
Epoch 9/25
34/35 [============================>.] - ETA: 2s - loss: 1.8447 - acc: 0.3558Epoch 1/25
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 2.4505 - acc: 0.2409
35/35 [==============================] - 72s 2s/step - loss: 1.8521 - acc: 0.3536 - val_loss: 2.4685 - val_acc: 0.2409
Epoch 10/25
34/35 [============================>.] - ETA: 1s - loss: 1.8280 - acc: 0.3989Epoch 1/25
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 2.2418 - acc: 0.2190
35/35 [==============================] - 71s 2s/step - loss: 1.8392 - acc: 0.3945 - val_loss: 2.2981 - val_acc: 0.2190
Epoch 11/25
34/35 [============================>.] - ETA: 2s - loss: 1.7572 - acc: 0.3942Epoch 1/25
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 2.0580 - acc: 0.2774
35/35 [==============================] - 74s 2s/step - loss: 1.7471 - acc: 0.3964 - val_loss: 2.0755 - val_acc: 0.2774
Epoch 12/25
34/35 [============================>.] - ETA: 1s - loss: 1.7938 - acc: 0.4026Epoch 1/25
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 2.0177 - acc: 0.3869
35/35 [==============================] - 72s 2s/step - loss: 1.7916 - acc: 0.4009 - val_loss: 1.9216 - val_acc: 0.3869
Epoch 13/25
34/35 [============================>.] - ETA: 1s - loss: 1.7209 - acc: 0.4204Epoch 1/25
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.7011 - acc: 0.5036
35/35 [==============================] - 72s 2s/step - loss: 1.7267 - acc: 0.4236 - val_loss: 1.6069 - val_acc: 0.5036
Epoch 14/25
34/35 [============================>.] - ETA: 1s - loss: 1.7331 - acc: 0.4129Epoch 1/25
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 2.1360 - acc: 0.3577
35/35 [==============================] - 72s 2s/step - loss: 1.7436 - acc: 0.4127 - val_loss: 2.0509 - val_acc: 0.3577
Epoch 15/25
34/35 [============================>.] - ETA: 1s - loss: 1.6884 - acc: 0.4026Epoch 1/25
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.7449 - acc: 0.3942
35/35 [==============================] - 71s 2s/step - loss: 1.6999 - acc: 0.3991 - val_loss: 1.6642 - val_acc: 0.3942
Epoch 16/25
34/35 [============================>.] - ETA: 2s - loss: 1.8037 - acc: 0.3942Epoch 1/25
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.8542 - acc: 0.4088
35/35 [==============================] - 73s 2s/step - loss: 1.7985 - acc: 0.3955 - val_loss: 1.7414 - val_acc: 0.4088
Epoch 17/25
34/35 [============================>.] - ETA: 1s - loss: 1.6664 - acc: 0.4541Epoch 1/25
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.6636 - acc: 0.4380
35/35 [==============================] - 71s 2s/step - loss: 1.6616 - acc: 0.4536 - val_loss: 1.5963 - val_acc: 0.4380
Epoch 18/25
34/35 [============================>.] - ETA: 1s - loss: 1.6293 - acc: 0.4391Epoch 1/25
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.8294 - acc: 0.4818
35/35 [==============================] - 71s 2s/step - loss: 1.6375 - acc: 0.4336 - val_loss: 1.6485 - val_acc: 0.4818
Epoch 19/25
34/35 [============================>.] - ETA: 1s - loss: 1.6848 - acc: 0.4204Epoch 1/25
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.9324 - acc: 0.4891
35/35 [==============================] - 71s 2s/step - loss: 1.6883 - acc: 0.4218 - val_loss: 1.6897 - val_acc: 0.4891
Epoch 20/25
34/35 [============================>.] - ETA: 1s - loss: 1.6352 - acc: 0.4560Epoch 1/25
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 2.1105 - acc: 0.4015
35/35 [==============================] - 72s 2s/step - loss: 1.6372 - acc: 0.4564 - val_loss: 1.8949 - val_acc: 0.4015
Epoch 21/25
34/35 [============================>.] - ETA: 1s - loss: 1.6412 - acc: 0.4719Epoch 1/25
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.7625 - acc: 0.4453
35/35 [==============================] - 72s 2s/step - loss: 1.6291 - acc: 0.4764 - val_loss: 1.7254 - val_acc: 0.4453
Epoch 22/25
34/35 [============================>.] - ETA: 1s - loss: 1.5675 - acc: 0.4747Epoch 1/25
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.6830 - acc: 0.4964
35/35 [==============================] - 72s 2s/step - loss: 1.5706 - acc: 0.4709 - val_loss: 1.6202 - val_acc: 0.4964
Epoch 23/25
34/35 [============================>.] - ETA: 2s - loss: 1.5321 - acc: 0.4770Epoch 1/25
137/35 [=====================================================================================================================] - 3s 19ms/sample - loss: 1.3433 - acc: 0.4891
35/35 [==============================] - 72s 2s/step - loss: 1.5238 - acc: 0.4782 - val_loss: 1.4930 - val_acc: 0.4891
Epoch 24/25
34/35 [============================>.] - ETA: 1s - loss: 1.5258 - acc: 0.4616Epoch 1/25
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.7526 - acc: 0.4599
35/35 [==============================] - 71s 2s/step - loss: 1.5349 - acc: 0.4609 - val_loss: 1.5807 - val_acc: 0.4599
Epoch 25/25
34/35 [============================>.] - ETA: 1s - loss: 1.5277 - acc: 0.4869Epoch 1/25
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.4398 - acc: 0.5912
35/35 [==============================] - 71s 2s/step - loss: 1.5248 - acc: 0.4836 - val_loss: 1.3153 - val_acc: 0.5912
Out[87]:
<tensorflow.python.keras.callbacks.History at 0x18ef0f4cd88>
In [88]:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()


plt.show()
<Figure size 432x288 with 0 Axes>
In [89]:
results = model.evaluate(X_test, y_test)
print('Accuracy: %f ' % (results[1]*100))
print('Loss: %f' % results[0])
138/138 [==============================] - 2s 16ms/sample - loss: 1.3097 - acc: 0.5725
Accuracy: 57.246375 
Loss: 1.309677
In [90]:
model.save('./Flower_Species_Classifier_CNN_Augmented.h5')

model.save_weights('./Flower_Species_Classifier_weights_CNN_Augmented.h5')
In [91]:
#100 epochs
In [93]:
history= model.fit_generator(datagen.flow(X_train, y_train, batch_size=32),  
                  epochs=100, 
                  validation_data=(X_val, y_val))
                  #callbacks = [model_checkpoint])
history
Epoch 1/100
34/35 [============================>.] - ETA: 1s - loss: 1.3451 - acc: 0.5468Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.1373 - acc: 0.5547
35/35 [==============================] - 69s 2s/step - loss: 1.3386 - acc: 0.5491 - val_loss: 1.1895 - val_acc: 0.5547
Epoch 2/100
34/35 [============================>.] - ETA: 1s - loss: 1.3042 - acc: 0.5449Epoch 1/100
137/35 [=====================================================================================================================] - 3s 19ms/sample - loss: 1.1728 - acc: 0.5766
35/35 [==============================] - 71s 2s/step - loss: 1.2997 - acc: 0.5482 - val_loss: 1.2855 - val_acc: 0.5766
Epoch 3/100
34/35 [============================>.] - ETA: 2s - loss: 1.3038 - acc: 0.5655Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.2002 - acc: 0.5255
35/35 [==============================] - 73s 2s/step - loss: 1.3114 - acc: 0.5627 - val_loss: 1.3243 - val_acc: 0.5255
Epoch 4/100
34/35 [============================>.] - ETA: 2s - loss: 1.2857 - acc: 0.5468Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.3304 - acc: 0.5693
35/35 [==============================] - 74s 2s/step - loss: 1.2888 - acc: 0.5445 - val_loss: 1.3048 - val_acc: 0.5693
Epoch 5/100
34/35 [============================>.] - ETA: 2s - loss: 1.2955 - acc: 0.5618Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.1089 - acc: 0.5985
35/35 [==============================] - 74s 2s/step - loss: 1.2943 - acc: 0.5618 - val_loss: 1.2260 - val_acc: 0.5985
Epoch 6/100
34/35 [============================>.] - ETA: 1s - loss: 1.3041 - acc: 0.5543Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.1833 - acc: 0.5547
35/35 [==============================] - 71s 2s/step - loss: 1.3103 - acc: 0.5518 - val_loss: 1.3169 - val_acc: 0.5547
Epoch 7/100
34/35 [============================>.] - ETA: 1s - loss: 1.2628 - acc: 0.5730Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 1.2705 - acc: 0.5766
35/35 [==============================] - 72s 2s/step - loss: 1.2614 - acc: 0.5745 - val_loss: 1.3540 - val_acc: 0.5766
Epoch 8/100
34/35 [============================>.] - ETA: 2s - loss: 1.2522 - acc: 0.5655Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.0901 - acc: 0.6131
35/35 [==============================] - 73s 2s/step - loss: 1.2555 - acc: 0.5673 - val_loss: 1.1502 - val_acc: 0.6131
Epoch 9/100
34/35 [============================>.] - ETA: 2s - loss: 1.2127 - acc: 0.5890Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.1441 - acc: 0.5912
35/35 [==============================] - 72s 2s/step - loss: 1.2383 - acc: 0.5836 - val_loss: 1.2040 - val_acc: 0.5912
Epoch 10/100
34/35 [============================>.] - ETA: 2s - loss: 1.2168 - acc: 0.5871Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.2683 - acc: 0.5328
35/35 [==============================] - 76s 2s/step - loss: 1.2315 - acc: 0.5864 - val_loss: 1.3467 - val_acc: 0.5328
Epoch 11/100
34/35 [============================>.] - ETA: 2s - loss: 1.2215 - acc: 0.5964Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.1857 - acc: 0.6277
35/35 [==============================] - 73s 2s/step - loss: 1.2128 - acc: 0.5982 - val_loss: 1.2653 - val_acc: 0.6277
Epoch 12/100
34/35 [============================>.] - ETA: 2s - loss: 1.2501 - acc: 0.5871Epoch 1/100
137/35 [=====================================================================================================================] - 3s 21ms/sample - loss: 1.1991 - acc: 0.5474
35/35 [==============================] - 74s 2s/step - loss: 1.2456 - acc: 0.5873 - val_loss: 1.2719 - val_acc: 0.5474
Epoch 13/100
34/35 [============================>.] - ETA: 1s - loss: 1.1516 - acc: 0.6011Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 3.3111 - acc: 0.3650
35/35 [==============================] - 72s 2s/step - loss: 1.1591 - acc: 0.5964 - val_loss: 3.0307 - val_acc: 0.3650
Epoch 14/100
34/35 [============================>.] - ETA: 1s - loss: 1.2457 - acc: 0.5768Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 2.4133 - acc: 0.4380
35/35 [==============================] - 71s 2s/step - loss: 1.2417 - acc: 0.5773 - val_loss: 2.1752 - val_acc: 0.4380
Epoch 15/100
34/35 [============================>.] - ETA: 2s - loss: 1.2483 - acc: 0.5852Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 1.2630 - acc: 0.5766
35/35 [==============================] - 73s 2s/step - loss: 1.2425 - acc: 0.5845 - val_loss: 1.2966 - val_acc: 0.5766
Epoch 16/100
34/35 [============================>.] - ETA: 2s - loss: 1.1907 - acc: 0.5908Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.1054 - acc: 0.5620
35/35 [==============================] - 74s 2s/step - loss: 1.1863 - acc: 0.5918 - val_loss: 1.1548 - val_acc: 0.5620
Epoch 17/100
34/35 [============================>.] - ETA: 1s - loss: 1.1913 - acc: 0.5983Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.2292 - acc: 0.5401
35/35 [==============================] - 72s 2s/step - loss: 1.1967 - acc: 0.5964 - val_loss: 1.3799 - val_acc: 0.5401
Epoch 18/100
34/35 [============================>.] - ETA: 2s - loss: 1.1986 - acc: 0.5824Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.2208 - acc: 0.6350
35/35 [==============================] - 73s 2s/step - loss: 1.2053 - acc: 0.5800 - val_loss: 1.2365 - val_acc: 0.6350
Epoch 19/100
34/35 [============================>.] - ETA: 2s - loss: 1.2201 - acc: 0.5833Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.7307 - acc: 0.5182
35/35 [==============================] - 73s 2s/step - loss: 1.2187 - acc: 0.5827 - val_loss: 1.6582 - val_acc: 0.5182
Epoch 20/100
34/35 [============================>.] - ETA: 2s - loss: 1.2943 - acc: 0.5749Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.0515 - acc: 0.6569
35/35 [==============================] - 73s 2s/step - loss: 1.2821 - acc: 0.5773 - val_loss: 1.1271 - val_acc: 0.6569
Epoch 21/100
34/35 [============================>.] - ETA: 1s - loss: 1.1674 - acc: 0.5908Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.2258 - acc: 0.6131
35/35 [==============================] - 72s 2s/step - loss: 1.1737 - acc: 0.5918 - val_loss: 1.2200 - val_acc: 0.6131
Epoch 22/100
34/35 [============================>.] - ETA: 1s - loss: 1.1725 - acc: 0.5983Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.1188 - acc: 0.6204
35/35 [==============================] - 72s 2s/step - loss: 1.1819 - acc: 0.5945 - val_loss: 1.1572 - val_acc: 0.6204
Epoch 23/100
34/35 [============================>.] - ETA: 2s - loss: 1.1388 - acc: 0.6096Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.1209 - acc: 0.5620
35/35 [==============================] - 73s 2s/step - loss: 1.1373 - acc: 0.6091 - val_loss: 1.1979 - val_acc: 0.5620
Epoch 24/100
34/35 [============================>.] - ETA: 2s - loss: 1.0593 - acc: 0.6433Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.3227 - acc: 0.5839
35/35 [==============================] - 77s 2s/step - loss: 1.0581 - acc: 0.6418 - val_loss: 1.3178 - val_acc: 0.5839
Epoch 25/100
34/35 [============================>.] - ETA: 2s - loss: 1.1481 - acc: 0.6021Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.0859 - acc: 0.6204
35/35 [==============================] - 74s 2s/step - loss: 1.1555 - acc: 0.6009 - val_loss: 1.1804 - val_acc: 0.6204
Epoch 26/100
34/35 [============================>.] - ETA: 2s - loss: 1.1670 - acc: 0.6002Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.4181 - acc: 0.5985
35/35 [==============================] - 73s 2s/step - loss: 1.1602 - acc: 0.6018 - val_loss: 1.3188 - val_acc: 0.5985
Epoch 27/100
34/35 [============================>.] - ETA: 2s - loss: 1.0656 - acc: 0.6152Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 0.9464 - acc: 0.6569
35/35 [==============================] - 75s 2s/step - loss: 1.0753 - acc: 0.6164 - val_loss: 1.0178 - val_acc: 0.6569
Epoch 28/100
34/35 [============================>.] - ETA: 2s - loss: 1.0738 - acc: 0.6301Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.0784 - acc: 0.5766
35/35 [==============================] - 74s 2s/step - loss: 1.0801 - acc: 0.6282 - val_loss: 1.1362 - val_acc: 0.5766
Epoch 29/100
34/35 [============================>.] - ETA: 1s - loss: 1.1580 - acc: 0.6096Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.1212 - acc: 0.5912
35/35 [==============================] - 72s 2s/step - loss: 1.1565 - acc: 0.6082 - val_loss: 1.1476 - val_acc: 0.5912
Epoch 30/100
34/35 [============================>.] - ETA: 1s - loss: 1.0891 - acc: 0.6311Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 0.8120 - acc: 0.6569
35/35 [==============================] - 73s 2s/step - loss: 1.0864 - acc: 0.6309 - val_loss: 0.9526 - val_acc: 0.6569
Epoch 31/100
34/35 [============================>.] - ETA: 1s - loss: 1.0786 - acc: 0.6283Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.0373 - acc: 0.6204
35/35 [==============================] - 72s 2s/step - loss: 1.0778 - acc: 0.6264 - val_loss: 1.0767 - val_acc: 0.6204
Epoch 32/100
34/35 [============================>.] - ETA: 1s - loss: 1.0929 - acc: 0.6386Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.0516 - acc: 0.6569
35/35 [==============================] - 72s 2s/step - loss: 1.1032 - acc: 0.6336 - val_loss: 1.0688 - val_acc: 0.6569
Epoch 33/100
34/35 [============================>.] - ETA: 2s - loss: 1.0117 - acc: 0.6517Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.0314 - acc: 0.6788
35/35 [==============================] - 72s 2s/step - loss: 1.0233 - acc: 0.6464 - val_loss: 1.0092 - val_acc: 0.6788
Epoch 34/100
34/35 [============================>.] - ETA: 2s - loss: 1.0315 - acc: 0.6358Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 2.2935 - acc: 0.4307
35/35 [==============================] - 73s 2s/step - loss: 1.0283 - acc: 0.6364 - val_loss: 1.9892 - val_acc: 0.4307
Epoch 35/100
34/35 [============================>.] - ETA: 1s - loss: 1.0714 - acc: 0.6386Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 0.9522 - acc: 0.6642
35/35 [==============================] - 73s 2s/step - loss: 1.0640 - acc: 0.6418 - val_loss: 1.0613 - val_acc: 0.6642
Epoch 36/100
34/35 [============================>.] - ETA: 2s - loss: 1.0446 - acc: 0.6489Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.2383 - acc: 0.6642
35/35 [==============================] - 75s 2s/step - loss: 1.0467 - acc: 0.6455 - val_loss: 1.2722 - val_acc: 0.6642
Epoch 37/100
34/35 [============================>.] - ETA: 1s - loss: 1.0003 - acc: 0.6554Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 0.9656 - acc: 0.6715
35/35 [==============================] - 72s 2s/step - loss: 0.9979 - acc: 0.6527 - val_loss: 1.0135 - val_acc: 0.6715
Epoch 38/100
34/35 [============================>.] - ETA: 1s - loss: 1.0782 - acc: 0.6273Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 0.7784 - acc: 0.6788
35/35 [==============================] - 72s 2s/step - loss: 1.0755 - acc: 0.6282 - val_loss: 0.9471 - val_acc: 0.6788
Epoch 39/100
34/35 [============================>.] - ETA: 2s - loss: 1.0224 - acc: 0.6433Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 0.9028 - acc: 0.7372
35/35 [==============================] - 73s 2s/step - loss: 1.0216 - acc: 0.6418 - val_loss: 0.9996 - val_acc: 0.7372
Epoch 40/100
34/35 [============================>.] - ETA: 2s - loss: 1.0668 - acc: 0.6498Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.4636 - acc: 0.5401
35/35 [==============================] - 73s 2s/step - loss: 1.0696 - acc: 0.6500 - val_loss: 1.5286 - val_acc: 0.5401
Epoch 41/100
34/35 [============================>.] - ETA: 2s - loss: 1.0263 - acc: 0.6423Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.0423 - acc: 0.6642
35/35 [==============================] - 73s 2s/step - loss: 1.0244 - acc: 0.6400 - val_loss: 1.1143 - val_acc: 0.6642
Epoch 42/100
34/35 [============================>.] - ETA: 2s - loss: 1.0765 - acc: 0.6451Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 1.0829 - acc: 0.6423
35/35 [==============================] - 73s 2s/step - loss: 1.0819 - acc: 0.6418 - val_loss: 1.1735 - val_acc: 0.6423
Epoch 43/100
34/35 [============================>.] - ETA: 2s - loss: 0.9772 - acc: 0.6601Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.8596 - acc: 0.4818
35/35 [==============================] - 73s 2s/step - loss: 0.9847 - acc: 0.6564 - val_loss: 2.1707 - val_acc: 0.4818
Epoch 44/100
34/35 [============================>.] - ETA: 2s - loss: 1.0427 - acc: 0.6489Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.1683 - acc: 0.5766
35/35 [==============================] - 73s 2s/step - loss: 1.0515 - acc: 0.6445 - val_loss: 1.1957 - val_acc: 0.5766
Epoch 45/100
34/35 [============================>.] - ETA: 1s - loss: 1.1053 - acc: 0.6273Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.3714 - acc: 0.5766
35/35 [==============================] - 72s 2s/step - loss: 1.1103 - acc: 0.6273 - val_loss: 1.2694 - val_acc: 0.5766
Epoch 46/100
34/35 [============================>.] - ETA: 1s - loss: 0.9548 - acc: 0.6807Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.7141 - acc: 0.6350
35/35 [==============================] - 72s 2s/step - loss: 0.9480 - acc: 0.6809 - val_loss: 1.3978 - val_acc: 0.6350
Epoch 47/100
34/35 [============================>.] - ETA: 1s - loss: 0.9886 - acc: 0.6610Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.3880 - acc: 0.6423
35/35 [==============================] - 72s 2s/step - loss: 0.9920 - acc: 0.6600 - val_loss: 1.3634 - val_acc: 0.6423
Epoch 48/100
34/35 [============================>.] - ETA: 2s - loss: 0.9982 - acc: 0.6536Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 0.9810 - acc: 0.6277
35/35 [==============================] - 72s 2s/step - loss: 0.9997 - acc: 0.6527 - val_loss: 0.9857 - val_acc: 0.6277
Epoch 49/100
34/35 [============================>.] - ETA: 2s - loss: 0.9933 - acc: 0.6657Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 0.9095 - acc: 0.7372
35/35 [==============================] - 74s 2s/step - loss: 0.9880 - acc: 0.6655 - val_loss: 0.9696 - val_acc: 0.7372
Epoch 50/100
34/35 [============================>.] - ETA: 2s - loss: 0.9907 - acc: 0.6704Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.0381 - acc: 0.7153
35/35 [==============================] - 73s 2s/step - loss: 0.9974 - acc: 0.6700 - val_loss: 1.0062 - val_acc: 0.7153
Epoch 51/100
34/35 [============================>.] - ETA: 2s - loss: 0.9854 - acc: 0.6742Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.0719 - acc: 0.7080
35/35 [==============================] - 73s 2s/step - loss: 1.0045 - acc: 0.6709 - val_loss: 1.1893 - val_acc: 0.7080
Epoch 52/100
34/35 [============================>.] - ETA: 2s - loss: 0.9621 - acc: 0.6657Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.3255 - acc: 0.6569
35/35 [==============================] - 73s 2s/step - loss: 0.9630 - acc: 0.6664 - val_loss: 1.3467 - val_acc: 0.6569
Epoch 53/100
34/35 [============================>.] - ETA: 2s - loss: 0.9269 - acc: 0.6873Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.1814 - acc: 0.6204
35/35 [==============================] - 73s 2s/step - loss: 0.9256 - acc: 0.6864 - val_loss: 1.1350 - val_acc: 0.6204
Epoch 54/100
34/35 [============================>.] - ETA: 2s - loss: 0.9146 - acc: 0.7097Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.0187 - acc: 0.6934
35/35 [==============================] - 72s 2s/step - loss: 0.9130 - acc: 0.7082 - val_loss: 1.1783 - val_acc: 0.6934
Epoch 55/100
34/35 [============================>.] - ETA: 1s - loss: 0.9772 - acc: 0.6639Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 1.3702 - acc: 0.5912
35/35 [==============================] - 72s 2s/step - loss: 0.9805 - acc: 0.6636 - val_loss: 1.4025 - val_acc: 0.5912
Epoch 56/100
34/35 [============================>.] - ETA: 2s - loss: 0.9779 - acc: 0.6620Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.0612 - acc: 0.7153
35/35 [==============================] - 73s 2s/step - loss: 0.9779 - acc: 0.6627 - val_loss: 1.0662 - val_acc: 0.7153
Epoch 57/100
34/35 [============================>.] - ETA: 2s - loss: 0.9312 - acc: 0.6835Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.1512 - acc: 0.6569
35/35 [==============================] - 72s 2s/step - loss: 0.9366 - acc: 0.6818 - val_loss: 1.1655 - val_acc: 0.6569
Epoch 58/100
34/35 [============================>.] - ETA: 2s - loss: 0.9734 - acc: 0.6807Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.3821 - acc: 0.6131
35/35 [==============================] - 73s 2s/step - loss: 0.9696 - acc: 0.6791 - val_loss: 1.4092 - val_acc: 0.6131
Epoch 59/100
34/35 [============================>.] - ETA: 2s - loss: 0.9338 - acc: 0.6891Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 0.7915 - acc: 0.6715
35/35 [==============================] - 75s 2s/step - loss: 0.9292 - acc: 0.6900 - val_loss: 0.9977 - val_acc: 0.6715
Epoch 60/100
34/35 [============================>.] - ETA: 2s - loss: 0.8904 - acc: 0.6835Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.1473 - acc: 0.7226
35/35 [==============================] - 73s 2s/step - loss: 0.8898 - acc: 0.6855 - val_loss: 1.0292 - val_acc: 0.7226
Epoch 61/100
34/35 [============================>.] - ETA: 2s - loss: 0.8299 - acc: 0.7219Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.1183 - acc: 0.6350
35/35 [==============================] - 73s 2s/step - loss: 0.8374 - acc: 0.7209 - val_loss: 1.2072 - val_acc: 0.6350
Epoch 62/100
34/35 [============================>.] - ETA: 1s - loss: 0.9176 - acc: 0.6788Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.1959 - acc: 0.6569
35/35 [==============================] - 72s 2s/step - loss: 0.9161 - acc: 0.6791 - val_loss: 1.0525 - val_acc: 0.6569
Epoch 63/100
34/35 [============================>.] - ETA: 1s - loss: 0.8836 - acc: 0.7069Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.0754 - acc: 0.6642
35/35 [==============================] - 72s 2s/step - loss: 0.8927 - acc: 0.7036 - val_loss: 1.1623 - val_acc: 0.6642
Epoch 64/100
34/35 [============================>.] - ETA: 1s - loss: 0.9192 - acc: 0.6976Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 0.8082 - acc: 0.7153
35/35 [==============================] - 73s 2s/step - loss: 0.9126 - acc: 0.6973 - val_loss: 0.9009 - val_acc: 0.7153
Epoch 65/100
34/35 [============================>.] - ETA: 2s - loss: 0.9228 - acc: 0.7022Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 0.9780 - acc: 0.7153
35/35 [==============================] - 73s 2s/step - loss: 0.9260 - acc: 0.7009 - val_loss: 0.9606 - val_acc: 0.7153
Epoch 66/100
34/35 [============================>.] - ETA: 2s - loss: 0.8763 - acc: 0.7013Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.4857 - acc: 0.6642
35/35 [==============================] - 74s 2s/step - loss: 0.8708 - acc: 0.7036 - val_loss: 1.4135 - val_acc: 0.6642
Epoch 67/100
34/35 [============================>.] - ETA: 1s - loss: 0.9728 - acc: 0.6779Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.1551 - acc: 0.6131
35/35 [==============================] - 72s 2s/step - loss: 0.9696 - acc: 0.6782 - val_loss: 1.0894 - val_acc: 0.6131
Epoch 68/100
34/35 [============================>.] - ETA: 2s - loss: 0.9725 - acc: 0.6657Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.2907 - acc: 0.5182
35/35 [==============================] - 73s 2s/step - loss: 0.9692 - acc: 0.6655 - val_loss: 1.3683 - val_acc: 0.5182
Epoch 69/100
34/35 [============================>.] - ETA: 1s - loss: 0.8937 - acc: 0.6882Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 0.9411 - acc: 0.6861
35/35 [==============================] - 72s 2s/step - loss: 0.8871 - acc: 0.6900 - val_loss: 1.1175 - val_acc: 0.6861
Epoch 70/100
34/35 [============================>.] - ETA: 2s - loss: 0.8280 - acc: 0.7210Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.0844 - acc: 0.6715
35/35 [==============================] - 72s 2s/step - loss: 0.8390 - acc: 0.7182 - val_loss: 1.0989 - val_acc: 0.6715
Epoch 71/100
34/35 [============================>.] - ETA: 2s - loss: 0.8644 - acc: 0.7069Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.3255 - acc: 0.7299
35/35 [==============================] - 73s 2s/step - loss: 0.8689 - acc: 0.7064 - val_loss: 1.1450 - val_acc: 0.7299
Epoch 72/100
34/35 [============================>.] - ETA: 1s - loss: 0.8228 - acc: 0.7191Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.3099 - acc: 0.7007
35/35 [==============================] - 72s 2s/step - loss: 0.8289 - acc: 0.7173 - val_loss: 1.1986 - val_acc: 0.7007
Epoch 73/100
34/35 [============================>.] - ETA: 2s - loss: 0.8674 - acc: 0.7135Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.1275 - acc: 0.6277
35/35 [==============================] - 74s 2s/step - loss: 0.8667 - acc: 0.7155 - val_loss: 1.1264 - val_acc: 0.6277
Epoch 74/100
34/35 [============================>.] - ETA: 2s - loss: 0.8121 - acc: 0.7079Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.1918 - acc: 0.6934
35/35 [==============================] - 73s 2s/step - loss: 0.8184 - acc: 0.7091 - val_loss: 1.1011 - val_acc: 0.6934
Epoch 75/100
34/35 [============================>.] - ETA: 2s - loss: 0.8752 - acc: 0.6919Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 1.0669 - acc: 0.6861
35/35 [==============================] - 73s 2s/step - loss: 0.8733 - acc: 0.6936 - val_loss: 1.2597 - val_acc: 0.6861
Epoch 76/100
34/35 [============================>.] - ETA: 1s - loss: 0.8698 - acc: 0.7097Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 0.9651 - acc: 0.7080
35/35 [==============================] - 72s 2s/step - loss: 0.8681 - acc: 0.7100 - val_loss: 1.0706 - val_acc: 0.7080
Epoch 77/100
34/35 [============================>.] - ETA: 2s - loss: 0.8692 - acc: 0.7135Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.3454 - acc: 0.6350
35/35 [==============================] - 74s 2s/step - loss: 0.8695 - acc: 0.7145 - val_loss: 1.3924 - val_acc: 0.6350
Epoch 78/100
34/35 [============================>.] - ETA: 1s - loss: 0.8529 - acc: 0.7144Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.0595 - acc: 0.7153
35/35 [==============================] - 72s 2s/step - loss: 0.8482 - acc: 0.7182 - val_loss: 1.0585 - val_acc: 0.7153
Epoch 79/100
34/35 [============================>.] - ETA: 2s - loss: 0.8463 - acc: 0.6985Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.7260 - acc: 0.5182
35/35 [==============================] - 75s 2s/step - loss: 0.8449 - acc: 0.6991 - val_loss: 1.8329 - val_acc: 0.5182
Epoch 80/100
34/35 [============================>.] - ETA: 2s - loss: 0.8466 - acc: 0.7322Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 1.0881 - acc: 0.6934
35/35 [==============================] - 73s 2s/step - loss: 0.8398 - acc: 0.7336 - val_loss: 1.2288 - val_acc: 0.6934
Epoch 81/100
34/35 [============================>.] - ETA: 2s - loss: 0.8468 - acc: 0.6994Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 1.3585 - acc: 0.6934
35/35 [==============================] - 73s 2s/step - loss: 0.8378 - acc: 0.7027 - val_loss: 1.1146 - val_acc: 0.6934
Epoch 82/100
34/35 [============================>.] - ETA: 2s - loss: 0.8292 - acc: 0.7378Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.1073 - acc: 0.7153
35/35 [==============================] - 74s 2s/step - loss: 0.8225 - acc: 0.7409 - val_loss: 1.0716 - val_acc: 0.7153
Epoch 83/100
34/35 [============================>.] - ETA: 2s - loss: 0.8425 - acc: 0.7247Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.1567 - acc: 0.7080
35/35 [==============================] - 76s 2s/step - loss: 0.8339 - acc: 0.7273 - val_loss: 1.0078 - val_acc: 0.7080
Epoch 84/100
34/35 [============================>.] - ETA: 2s - loss: 0.8584 - acc: 0.7079Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.7647 - acc: 0.7153
35/35 [==============================] - 73s 2s/step - loss: 0.8551 - acc: 0.7091 - val_loss: 1.4710 - val_acc: 0.7153
Epoch 85/100
34/35 [============================>.] - ETA: 2s - loss: 0.7976 - acc: 0.7397Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 1.1460 - acc: 0.7226
35/35 [==============================] - 73s 2s/step - loss: 0.8004 - acc: 0.7391 - val_loss: 1.0425 - val_acc: 0.7226
Epoch 86/100
34/35 [============================>.] - ETA: 2s - loss: 0.8200 - acc: 0.7238Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.3594 - acc: 0.6934
35/35 [==============================] - 73s 2s/step - loss: 0.8166 - acc: 0.7255 - val_loss: 1.3286 - val_acc: 0.6934
Epoch 87/100
34/35 [============================>.] - ETA: 2s - loss: 0.7688 - acc: 0.7434Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 0.9994 - acc: 0.6715
35/35 [==============================] - 74s 2s/step - loss: 0.7838 - acc: 0.7436 - val_loss: 1.1246 - val_acc: 0.6715
Epoch 88/100
34/35 [============================>.] - ETA: 2s - loss: 0.8146 - acc: 0.7275Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.4037 - acc: 0.6204
35/35 [==============================] - 73s 2s/step - loss: 0.8214 - acc: 0.7264 - val_loss: 1.2520 - val_acc: 0.6204
Epoch 89/100
34/35 [============================>.] - ETA: 2s - loss: 0.7435 - acc: 0.7547Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 1.2014 - acc: 0.6569
35/35 [==============================] - 73s 2s/step - loss: 0.7536 - acc: 0.7491 - val_loss: 1.1717 - val_acc: 0.6569
Epoch 90/100
34/35 [============================>.] - ETA: 2s - loss: 0.7391 - acc: 0.7500Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 1.1750 - acc: 0.7080
35/35 [==============================] - 74s 2s/step - loss: 0.7452 - acc: 0.7473 - val_loss: 1.1193 - val_acc: 0.7080
Epoch 91/100
34/35 [============================>.] - ETA: 2s - loss: 0.7803 - acc: 0.7228Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 0.9106 - acc: 0.7007
35/35 [==============================] - 72s 2s/step - loss: 0.7850 - acc: 0.7191 - val_loss: 0.9903 - val_acc: 0.7007
Epoch 92/100
34/35 [============================>.] - ETA: 1s - loss: 0.8122 - acc: 0.7378Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.2861 - acc: 0.6715
35/35 [==============================] - 73s 2s/step - loss: 0.8043 - acc: 0.7391 - val_loss: 1.2489 - val_acc: 0.6715
Epoch 93/100
34/35 [============================>.] - ETA: 2s - loss: 0.8318 - acc: 0.7313Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 1.3335 - acc: 0.6058
35/35 [==============================] - 73s 2s/step - loss: 0.8447 - acc: 0.7291 - val_loss: 1.4420 - val_acc: 0.6058
Epoch 94/100
34/35 [============================>.] - ETA: 2s - loss: 0.8097 - acc: 0.7406Epoch 1/100
137/35 [=====================================================================================================================] - 2s 18ms/sample - loss: 1.9343 - acc: 0.5401
35/35 [==============================] - 74s 2s/step - loss: 0.7955 - acc: 0.7482 - val_loss: 2.0021 - val_acc: 0.5401
Epoch 95/100
34/35 [============================>.] - ETA: 2s - loss: 0.7677 - acc: 0.7659Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.2098 - acc: 0.6131
35/35 [==============================] - 73s 2s/step - loss: 0.7740 - acc: 0.7673 - val_loss: 1.3741 - val_acc: 0.6131
Epoch 96/100
34/35 [============================>.] - ETA: 2s - loss: 0.7738 - acc: 0.7397Epoch 1/100
137/35 [=====================================================================================================================] - 2s 14ms/sample - loss: 0.9566 - acc: 0.7664
35/35 [==============================] - 73s 2s/step - loss: 0.7694 - acc: 0.7409 - val_loss: 0.9191 - val_acc: 0.7664
Epoch 97/100
34/35 [============================>.] - ETA: 2s - loss: 0.7043 - acc: 0.7656Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 1.2050 - acc: 0.6715
35/35 [==============================] - 73s 2s/step - loss: 0.7066 - acc: 0.7655 - val_loss: 1.4965 - val_acc: 0.6715
Epoch 98/100
34/35 [============================>.] - ETA: 1s - loss: 0.7963 - acc: 0.7388Epoch 1/100
137/35 [=====================================================================================================================] - 2s 17ms/sample - loss: 0.9127 - acc: 0.7518
35/35 [==============================] - 72s 2s/step - loss: 0.7919 - acc: 0.7418 - val_loss: 1.0027 - val_acc: 0.7518
Epoch 99/100
34/35 [============================>.] - ETA: 2s - loss: 0.7744 - acc: 0.7491Epoch 1/100
137/35 [=====================================================================================================================] - 2s 16ms/sample - loss: 1.7291 - acc: 0.7080
35/35 [==============================] - 73s 2s/step - loss: 0.7736 - acc: 0.7500 - val_loss: 1.3784 - val_acc: 0.7080
Epoch 100/100
34/35 [============================>.] - ETA: 2s - loss: 0.8366 - acc: 0.7238Epoch 1/100
137/35 [=====================================================================================================================] - 2s 15ms/sample - loss: 0.7309 - acc: 0.7007
35/35 [==============================] - 73s 2s/step - loss: 0.8335 - acc: 0.7236 - val_loss: 0.8735 - val_acc: 0.7007
Out[93]:
<tensorflow.python.keras.callbacks.History at 0x18f0806c908>
In [94]:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()


plt.show()
<Figure size 432x288 with 0 Axes>
In [95]:
results = model.evaluate(X_test, y_test)
print('Accuracy: %f ' % (results[1]*100))
print('Loss: %f' % results[0])
138/138 [==============================] - 2s 16ms/sample - loss: 0.9783 - acc: 0.7101
Accuracy: 71.014494 
Loss: 0.978255
In [ ]:
 
In [96]:
model.save('./Flower_Species_Classifier_CNN_Augmented_100.h5')

model.save_weights('./Flower_Species_Classifier_weights_CNN_Augmented_100.h5')
  • With CNN we have noticed the simplest CNN is able to outperform NN.
  • Adding additional layers and using ImageGenerator for augmenting images gives a boost to the accuracies and it has increased the accuracies to 71%
In [ ]:
 
In [ ]:
 
In [ ]:
 

Transfer Learning

  • Reimporting Data and creatinng data set as jupyter notebook was crashing for whole execution in a single go
In [34]:
from keras.applications import VGG16
#Load the VGG model
vgg_conv = VGG16(weights='F:/GreatLearning/AI/ComputerVision/week 2/Week 2 - CV  - Mentor deck/Case study/data/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5',
                 include_top=False,
                 input_shape=(img_height, img_width, 3))
In [ ]:
 

Re importing data

  • Attempted to reimport data with 224x224x3 and do the model prep but my local system is not able to handle it and hence reduced it back to 100 again
In [74]:
#we can not directly use the image, we have to process the image.
img_height=100
img_width=100
image_size=100
specPath='F:\\GreatLearning\AI\\ComputerVision\\Project\\Flowers-Classification\\17flowers-train\\jpg'


#we can not directly use the image, we have to process the image.

from pathlib import Path
from skimage.io import imread
from keras.preprocessing import image
import cv2 as cv
def load_image_files(container_path):
    image_dir = Path(container_path)
    folders = [directory for directory in image_dir.iterdir() if directory.is_dir()]
    categories = [fo.name for fo in folders]

    images = []
    flat_data = []
    target = []
    count = 0
    train_img = []
    label_img = []
    for i, direc in enumerate(folders):
        for file in direc.iterdir():
            count += 1
            img = imread(file)
            #img = cv.cvtColor(img, cv.COLOR_BGR2RGB)
            img_pred = cv.resize(img, (img_height, img_width), interpolation=cv.INTER_AREA)
            img_pred = image.img_to_array(img_pred)
            #img_pred = img_pred / 255
            train_img.append(img_pred)
            label_img.append(categories[i])
            
    X = np.array(train_img)
    y = np.array(label_img)
    return X,y
In [75]:
vggX = []
vggy = []
vggX,vggy = load_image_files(specPath)
In [ ]:
 
Exploring shape of imported data
In [76]:
vggX.shape
Out[76]:
(1375, 100, 100, 3)
In [77]:
vggy.shape
Out[77]:
(1375,)
In [78]:
vggy = np.asarray(vggy).reshape(vggy.shape[0],1)
In [79]:
vggy.shape
Out[79]:
(1375, 1)
In [80]:
#from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import OneHotEncoder


one_hot_encoder = OneHotEncoder(sparse=False)
one_hot_encoder.fit(vggy.reshape(-1, 1))

vggy = one_hot_encoder.transform(vggy.reshape(-1, 1))

print("Shape of y:", vggy.shape)
Shape of y: (1375, 17)
In [81]:
vggX_train, vggX_test,vggy_train, vggy_test = train_test_split(vggX, vggy, random_state=42, test_size=0.2)
In [82]:
vggX_val, vggX_test, vggy_val, vggy_test = train_test_split(vggX_test, vggy_test, random_state=42, test_size=0.5)
In [83]:
#View data set shape
print("X_train: "+str(vggX_train.shape))
print("X_test: "+str(vggX_test.shape))
print("X_val: "+str(vggX_val.shape))
print("y_train: "+str(vggy_train.shape))
print("y_test: "+str(vggy_test.shape))
print("y_val: "+str(vggy_val.shape))
X_train: (1100, 100, 100, 3)
X_test: (138, 100, 100, 3)
X_val: (137, 100, 100, 3)
y_train: (1100, 17)
y_test: (138, 17)
y_val: (137, 17)
In [84]:
import matplotlib.pyplot as plt
plt.figure(figsize=(10,10)) # plot 25 images
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(vggX_train[i]/255, cmap=plt.cm.binary)
    plt.xlabel(vggy_train[i])
In [142]:
# use vgg16 pre-trained model with trainable densely connected output layer

from keras.applications import VGG16
#Load the VGG model
#Loading VGG model from mentors deck as loading online was slowing things down
#Code details captured from mentor deck


vgg_conv = VGG16(weights='F:/GreatLearning/AI/ComputerVision/week 2/Week 2 - CV  - Mentor deck/Case study/data/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5',
                 include_top=False,
                 input_shape=(img_height, img_width, 3)
                )

# Freeze all the layers except for the last layer: 
for layer in vgg_conv.layers[:-4]:
    layer.trainable = False
 
from keras import models
from keras import layers
from keras import optimizers
 
# Create the model
model = models.Sequential()
 
# Add the vgg convolutional base model
model.add(vgg_conv)
 
# Add new layers
model.add(layers.Flatten())
model.add(layers.Dense(1024, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(17, activation='softmax'))
model.summary() 
Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
vgg16 (Model)                (None, 3, 3, 512)         14714688  
_________________________________________________________________
flatten_5 (Flatten)          (None, 4608)              0         
_________________________________________________________________
dense_9 (Dense)              (None, 1024)              4719616   
_________________________________________________________________
dropout_5 (Dropout)          (None, 1024)              0         
_________________________________________________________________
dense_10 (Dense)             (None, 17)                17425     
=================================================================
Total params: 19,451,729
Trainable params: 11,816,465
Non-trainable params: 7,635,264
_________________________________________________________________
In [143]:
# image augmentation for train set and image resizing for validation & test
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator( 
      rescale=1./255,
      rotation_range=20,
      width_shift_range=0.2,
      height_shift_range=0.2,
      horizontal_flip=True,
      fill_mode='nearest')
 
validation_datagen = ImageDataGenerator(rescale=1./255) 

train_batchsize = 100
val_batchsize = 10
 
train_generator = train_datagen.flow( 
        vggX_train,vggy_train,
        batch_size=train_batchsize)
 
validation_generator = validation_datagen.flow(
        vggX_val,vggy_val,
        batch_size=val_batchsize,
        shuffle=False)

test_generator = validation_datagen.flow(
        vggX_test,vggy_test,
        batch_size=val_batchsize,
        shuffle=False)
In [ ]:
vggX_test=vggX_test/255
In [144]:
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=2e-4),
              metrics=['acc'])
In [145]:
history = model.fit_generator(
      train_generator,
      steps_per_epoch=vggX_train.shape[0]/train_generator.batch_size ,
      epochs=nb_epochs,
      validation_data=validation_generator,
      validation_steps=vggX_val.shape[0]/validation_generator.batch_size)
Epoch 1/25
11/11 [==============================] - 89s 8s/step - loss: 2.7875 - acc: 0.1445 - val_loss: 2.3400 - val_acc: 0.2847
Epoch 2/25
11/11 [==============================] - 86s 8s/step - loss: 2.0967 - acc: 0.3427 - val_loss: 2.6819 - val_acc: 0.3504
Epoch 3/25
11/11 [==============================] - 89s 8s/step - loss: 1.7868 - acc: 0.4391 - val_loss: 1.6473 - val_acc: 0.4818
Epoch 4/25
11/11 [==============================] - 91s 8s/step - loss: 1.3787 - acc: 0.5718 - val_loss: 1.6331 - val_acc: 0.6058
Epoch 5/25
11/11 [==============================] - 89s 8s/step - loss: 1.1046 - acc: 0.6755 - val_loss: 1.1195 - val_acc: 0.6496
Epoch 6/25
11/11 [==============================] - 91s 8s/step - loss: 0.9655 - acc: 0.7000 - val_loss: 1.0248 - val_acc: 0.7518
Epoch 7/25
11/11 [==============================] - 93s 8s/step - loss: 0.6295 - acc: 0.8018 - val_loss: 1.5174 - val_acc: 0.7299
Epoch 8/25
11/11 [==============================] - 90s 8s/step - loss: 0.6972 - acc: 0.7755 - val_loss: 1.8386 - val_acc: 0.7080
Epoch 9/25
11/11 [==============================] - 91s 8s/step - loss: 0.5427 - acc: 0.8282 - val_loss: 1.1825 - val_acc: 0.7007
Epoch 10/25
11/11 [==============================] - 93s 8s/step - loss: 0.4797 - acc: 0.8500 - val_loss: 0.9004 - val_acc: 0.7007
Epoch 11/25
11/11 [==============================] - 88s 8s/step - loss: 0.4257 - acc: 0.8791 - val_loss: 1.0635 - val_acc: 0.8175
Epoch 12/25
11/11 [==============================] - 88s 8s/step - loss: 0.5820 - acc: 0.8336 - val_loss: 0.7931 - val_acc: 0.7664
Epoch 13/25
11/11 [==============================] - 86s 8s/step - loss: 0.2830 - acc: 0.9200 - val_loss: 1.3951 - val_acc: 0.8029
Epoch 14/25
11/11 [==============================] - 87s 8s/step - loss: 0.3910 - acc: 0.8864 - val_loss: 1.7862 - val_acc: 0.8175
Epoch 15/25
11/11 [==============================] - 90s 8s/step - loss: 0.2533 - acc: 0.9273 - val_loss: 1.4578 - val_acc: 0.8394
Epoch 16/25
11/11 [==============================] - 91s 8s/step - loss: 0.2452 - acc: 0.9282 - val_loss: 2.1011 - val_acc: 0.8102
Epoch 17/25
11/11 [==============================] - 101s 9s/step - loss: 0.2886 - acc: 0.9236 - val_loss: 1.3366 - val_acc: 0.6423
Epoch 18/25
11/11 [==============================] - 103s 9s/step - loss: 0.1958 - acc: 0.9373 - val_loss: 0.6799 - val_acc: 0.8832
Epoch 19/25
11/11 [==============================] - 97s 9s/step - loss: 0.2517 - acc: 0.9191 - val_loss: 1.0231 - val_acc: 0.7956
Epoch 20/25
11/11 [==============================] - 92s 8s/step - loss: 0.1800 - acc: 0.9527 - val_loss: 0.7955 - val_acc: 0.7956
Epoch 21/25
11/11 [==============================] - 95s 9s/step - loss: 0.1837 - acc: 0.9545 - val_loss: 1.1238 - val_acc: 0.8102
Epoch 22/25
11/11 [==============================] - 94s 9s/step - loss: 0.3043 - acc: 0.9200 - val_loss: 0.3179 - val_acc: 0.8832
Epoch 23/25
11/11 [==============================] - 108s 10s/step - loss: 0.0917 - acc: 0.9673 - val_loss: 0.8100 - val_acc: 0.8686
Epoch 24/25
11/11 [==============================] - 117s 11s/step - loss: 0.1122 - acc: 0.9627 - val_loss: 0.6866 - val_acc: 0.8613
Epoch 25/25
11/11 [==============================] - 109s 10s/step - loss: 0.0953 - acc: 0.9755 - val_loss: 0.8089 - val_acc: 0.7737
In [146]:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()


plt.show()
<Figure size 432x288 with 0 Axes>
In [ ]:
 
In [147]:
results = model.evaluate(vggX_test, vggy_test)
print('Accuracy: %f ' % (results[1]*100))
print('Loss: %f' % results[0])
138/138 [==============================] - 9s 68ms/step
Accuracy: 80.434781 
Loss: 0.971579
In [95]:
Y_pred_test_cls = (model.predict(vggX_test) > 0.5).astype("int32")

plt.figure(figsize=(2,2))
plt.imshow(vggX_test[30]/255)
plt.show()

#print('Label - one hot encoded: \n',vggy_test_cat.iloc[30] )
print('Actual Label - one hot encoded:  ', vggy_test[30])
print('Predicted Label - one hot encoded: ',Y_pred_test_cls[30] )
Actual Label - one hot encoded:   [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Predicted Label - one hot encoded:  [0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
In [96]:
model.save('./Flower_Species_Classifier_VGG16_Augmented_25.h5')

model.save_weights('./Flower_Species_Classifier_weights_VGG16_Augmented_25.h5')
  • With 25 epochs we are getting a accuracy of 80% which is close to CNN's result after 100 epochs
  • continue training the existing model for anothr 100 epochs and we can see immproved results
In [103]:
history = model.fit_generator(
      train_generator,
      steps_per_epoch=vggX_train.shape[0]/train_generator.batch_size ,
      epochs=100,
      validation_data=validation_generator,
      validation_steps=vggX_val.shape[0]/validation_generator.batch_size)
Epoch 1/100
11/11 [==============================] - 102s 9s/step - loss: 0.1648 - acc: 0.9709 - val_loss: 1.3500 - val_acc: 0.8759
Epoch 2/100
11/11 [==============================] - 103s 9s/step - loss: 0.0950 - acc: 0.9764 - val_loss: 1.6532 - val_acc: 0.7737
Epoch 3/100
11/11 [==============================] - 126s 11s/step - loss: 0.1101 - acc: 0.9691 - val_loss: 0.8724 - val_acc: 0.8686
Epoch 4/100
11/11 [==============================] - 115s 10s/step - loss: 0.0659 - acc: 0.9800 - val_loss: 1.9513 - val_acc: 0.7664
Epoch 5/100
11/11 [==============================] - 117s 11s/step - loss: 0.1118 - acc: 0.9718 - val_loss: 0.5044 - val_acc: 0.8321
Epoch 6/100
11/11 [==============================] - 116s 11s/step - loss: 0.0211 - acc: 0.9936 - val_loss: 0.7728 - val_acc: 0.8540
Epoch 7/100
11/11 [==============================] - 119s 11s/step - loss: 0.1001 - acc: 0.9782 - val_loss: 0.9097 - val_acc: 0.8759
Epoch 8/100
11/11 [==============================] - 117s 11s/step - loss: 0.0979 - acc: 0.9755 - val_loss: 1.4693 - val_acc: 0.7080
Epoch 9/100
11/11 [==============================] - 121s 11s/step - loss: 0.2795 - acc: 0.9400 - val_loss: 0.9314 - val_acc: 0.7591
Epoch 10/100
11/11 [==============================] - 119s 11s/step - loss: 0.0480 - acc: 0.9882 - val_loss: 0.4596 - val_acc: 0.8686
Epoch 11/100
11/11 [==============================] - 102s 9s/step - loss: 0.0368 - acc: 0.9873 - val_loss: 0.5243 - val_acc: 0.9124
Epoch 12/100
11/11 [==============================] - 89s 8s/step - loss: 0.0745 - acc: 0.9845 - val_loss: 0.8013 - val_acc: 0.8248
Epoch 13/100
11/11 [==============================] - 88s 8s/step - loss: 0.2192 - acc: 0.9745 - val_loss: 1.5239 - val_acc: 0.6569
Epoch 14/100
11/11 [==============================] - 92s 8s/step - loss: 0.2702 - acc: 0.9364 - val_loss: 0.6441 - val_acc: 0.8832
Epoch 15/100
11/11 [==============================] - 88s 8s/step - loss: 0.0023 - acc: 1.0000 - val_loss: 0.6185 - val_acc: 0.8759
Epoch 16/100
11/11 [==============================] - 88s 8s/step - loss: 0.0040 - acc: 0.9982 - val_loss: 0.9913 - val_acc: 0.8321
Epoch 17/100
11/11 [==============================] - 89s 8s/step - loss: 0.4003 - acc: 0.9400 - val_loss: 0.9491 - val_acc: 0.8832
Epoch 18/100
11/11 [==============================] - 95s 9s/step - loss: 0.0162 - acc: 0.9955 - val_loss: 1.1181 - val_acc: 0.8905
Epoch 19/100
11/11 [==============================] - 119s 11s/step - loss: 0.1062 - acc: 0.9827 - val_loss: 1.6354 - val_acc: 0.8394
Epoch 20/100
11/11 [==============================] - 114s 10s/step - loss: 0.0736 - acc: 0.9800 - val_loss: 0.9891 - val_acc: 0.8686
Epoch 21/100
11/11 [==============================] - 118s 11s/step - loss: 0.0497 - acc: 0.9918 - val_loss: 1.8473 - val_acc: 0.8686
Epoch 22/100
11/11 [==============================] - 122s 11s/step - loss: 0.0959 - acc: 0.9818 - val_loss: 0.7073 - val_acc: 0.8905
Epoch 23/100
11/11 [==============================] - 97s 9s/step - loss: 0.3401 - acc: 0.9355 - val_loss: 0.6620 - val_acc: 0.8905
Epoch 24/100
11/11 [==============================] - 88s 8s/step - loss: 0.0171 - acc: 0.9955 - val_loss: 1.3348 - val_acc: 0.8978
Epoch 25/100
11/11 [==============================] - 88s 8s/step - loss: 0.1331 - acc: 0.9782 - val_loss: 0.7604 - val_acc: 0.8613
Epoch 26/100
11/11 [==============================] - 88s 8s/step - loss: 0.0658 - acc: 0.9855 - val_loss: 0.6660 - val_acc: 0.8686
Epoch 27/100
11/11 [==============================] - 92s 8s/step - loss: 0.0281 - acc: 0.9918 - val_loss: 1.3554 - val_acc: 0.8613
Epoch 28/100
11/11 [==============================] - 95s 9s/step - loss: 0.2924 - acc: 0.9464 - val_loss: 0.7571 - val_acc: 0.8613
Epoch 29/100
11/11 [==============================] - 93s 8s/step - loss: 0.0109 - acc: 0.9991 - val_loss: 1.0635 - val_acc: 0.8832
Epoch 30/100
11/11 [==============================] - 91s 8s/step - loss: 0.0024 - acc: 1.0000 - val_loss: 1.3401 - val_acc: 0.8686
Epoch 31/100
11/11 [==============================] - 98s 9s/step - loss: 0.1151 - acc: 0.9773 - val_loss: 1.7290 - val_acc: 0.7445
Epoch 32/100
11/11 [==============================] - 99s 9s/step - loss: 0.2012 - acc: 0.9500 - val_loss: 1.2221 - val_acc: 0.8613
Epoch 33/100
11/11 [==============================] - 94s 9s/step - loss: 0.0031 - acc: 1.0000 - val_loss: 0.7800 - val_acc: 0.8540
Epoch 34/100
11/11 [==============================] - 91s 8s/step - loss: 0.5271 - acc: 0.9291 - val_loss: 0.4269 - val_acc: 0.8759
Epoch 35/100
11/11 [==============================] - 90s 8s/step - loss: 0.0137 - acc: 0.9982 - val_loss: 0.3539 - val_acc: 0.8613
Epoch 36/100
11/11 [==============================] - 90s 8s/step - loss: 0.0842 - acc: 0.9809 - val_loss: 2.0523 - val_acc: 0.8394
Epoch 37/100
11/11 [==============================] - 88s 8s/step - loss: 0.0022 - acc: 1.0000 - val_loss: 1.6478 - val_acc: 0.8467
Epoch 38/100
11/11 [==============================] - 87s 8s/step - loss: 0.3167 - acc: 0.9500 - val_loss: 1.4777 - val_acc: 0.8759
Epoch 39/100
11/11 [==============================] - 87s 8s/step - loss: 0.0136 - acc: 0.9982 - val_loss: 1.3873 - val_acc: 0.8686
Epoch 40/100
11/11 [==============================] - 86s 8s/step - loss: 0.2694 - acc: 0.9464 - val_loss: 0.4278 - val_acc: 0.8540
Epoch 41/100
11/11 [==============================] - 88s 8s/step - loss: 0.0157 - acc: 0.9964 - val_loss: 0.6389 - val_acc: 0.8978
Epoch 42/100
11/11 [==============================] - 88s 8s/step - loss: 0.0011 - acc: 1.0000 - val_loss: 0.3338 - val_acc: 0.8832
Epoch 43/100
11/11 [==============================] - 91s 8s/step - loss: 0.0065 - acc: 0.9991 - val_loss: 0.8870 - val_acc: 0.7664
Epoch 44/100
11/11 [==============================] - 87s 8s/step - loss: 0.3810 - acc: 0.9491 - val_loss: 0.8856 - val_acc: 0.8978
Epoch 45/100
11/11 [==============================] - 86s 8s/step - loss: 0.0927 - acc: 0.9836 - val_loss: 0.5188 - val_acc: 0.8832
Epoch 46/100
11/11 [==============================] - 87s 8s/step - loss: 0.0030 - acc: 0.9991 - val_loss: 0.2709 - val_acc: 0.9051
Epoch 47/100
11/11 [==============================] - 88s 8s/step - loss: 0.7093 - acc: 0.8973 - val_loss: 0.6547 - val_acc: 0.8540
Epoch 48/100
11/11 [==============================] - 90s 8s/step - loss: 0.0313 - acc: 0.9927 - val_loss: 1.4897 - val_acc: 0.8540
Epoch 49/100
11/11 [==============================] - 89s 8s/step - loss: 0.0031 - acc: 1.0000 - val_loss: 1.0749 - val_acc: 0.8759
Epoch 50/100
11/11 [==============================] - 85s 8s/step - loss: 0.0104 - acc: 0.9964 - val_loss: 1.8695 - val_acc: 0.8832
Epoch 51/100
11/11 [==============================] - 86s 8s/step - loss: 0.0810 - acc: 0.9809 - val_loss: 1.1776 - val_acc: 0.7591
Epoch 52/100
11/11 [==============================] - 88s 8s/step - loss: 0.1791 - acc: 0.9727 - val_loss: 0.4856 - val_acc: 0.9124
Epoch 53/100
11/11 [==============================] - 88s 8s/step - loss: 0.0030 - acc: 0.9982 - val_loss: 0.9225 - val_acc: 0.9051
Epoch 54/100
11/11 [==============================] - 85s 8s/step - loss: 0.2289 - acc: 0.9573 - val_loss: 0.2085 - val_acc: 0.9051
Epoch 55/100
11/11 [==============================] - 86s 8s/step - loss: 0.0029 - acc: 1.0000 - val_loss: 0.1463 - val_acc: 0.8905
Epoch 56/100
11/11 [==============================] - 85s 8s/step - loss: 0.0527 - acc: 0.9936 - val_loss: 0.8631 - val_acc: 0.8394
Epoch 57/100
11/11 [==============================] - 86s 8s/step - loss: 0.0627 - acc: 0.9855 - val_loss: 5.8539 - val_acc: 0.7226
Epoch 58/100
11/11 [==============================] - 86s 8s/step - loss: 0.3671 - acc: 0.9755 - val_loss: 0.1290 - val_acc: 0.8832
Epoch 59/100
11/11 [==============================] - 85s 8s/step - loss: 0.0048 - acc: 0.9982 - val_loss: 1.0084 - val_acc: 0.8540
Epoch 60/100
11/11 [==============================] - 86s 8s/step - loss: 9.5921e-04 - acc: 1.0000 - val_loss: 0.5152 - val_acc: 0.8686
Epoch 61/100
11/11 [==============================] - 85s 8s/step - loss: 0.4692 - acc: 0.9345 - val_loss: 0.5729 - val_acc: 0.8613
Epoch 62/100
11/11 [==============================] - 85s 8s/step - loss: 0.0294 - acc: 0.9955 - val_loss: 0.9533 - val_acc: 0.8613
Epoch 63/100
11/11 [==============================] - 86s 8s/step - loss: 0.0027 - acc: 1.0000 - val_loss: 0.8465 - val_acc: 0.8978
Epoch 64/100
11/11 [==============================] - 85s 8s/step - loss: 0.0012 - acc: 1.0000 - val_loss: 0.6743 - val_acc: 0.8832
Epoch 65/100
11/11 [==============================] - 85s 8s/step - loss: 0.2297 - acc: 0.9655 - val_loss: 0.6803 - val_acc: 0.8102
Epoch 66/100
11/11 [==============================] - 85s 8s/step - loss: 0.1164 - acc: 0.9855 - val_loss: 1.4584 - val_acc: 0.8613
Epoch 67/100
11/11 [==============================] - 86s 8s/step - loss: 0.0151 - acc: 0.9964 - val_loss: 1.8385 - val_acc: 0.8175
Epoch 68/100
11/11 [==============================] - 84s 8s/step - loss: 0.0011 - acc: 1.0000 - val_loss: 1.9613 - val_acc: 0.8540
Epoch 69/100
11/11 [==============================] - 86s 8s/step - loss: 4.2179e-04 - acc: 1.0000 - val_loss: 1.9144 - val_acc: 0.8613
Epoch 70/100
11/11 [==============================] - 85s 8s/step - loss: 6.4841e-05 - acc: 1.0000 - val_loss: 2.3422 - val_acc: 0.8540
Epoch 71/100
11/11 [==============================] - 85s 8s/step - loss: 3.0749e-04 - acc: 1.0000 - val_loss: 1.8566 - val_acc: 0.8613
Epoch 72/100
11/11 [==============================] - 84s 8s/step - loss: 1.1951 - acc: 0.9273 - val_loss: 0.7685 - val_acc: 0.8832
Epoch 73/100
11/11 [==============================] - 85s 8s/step - loss: 0.0255 - acc: 0.9945 - val_loss: 0.6187 - val_acc: 0.8686
Epoch 74/100
11/11 [==============================] - 86s 8s/step - loss: 0.0298 - acc: 0.9927 - val_loss: 1.2922 - val_acc: 0.8613
Epoch 75/100
11/11 [==============================] - 85s 8s/step - loss: 0.0115 - acc: 0.9945 - val_loss: 1.5206 - val_acc: 0.8613
Epoch 76/100
11/11 [==============================] - 85s 8s/step - loss: 3.8946e-04 - acc: 1.0000 - val_loss: 1.2514 - val_acc: 0.8613
Epoch 77/100
11/11 [==============================] - 85s 8s/step - loss: 0.4350 - acc: 0.9709 - val_loss: 7.0203 - val_acc: 0.6131
Epoch 78/100
11/11 [==============================] - 85s 8s/step - loss: 0.6652 - acc: 0.9136 - val_loss: 0.6002 - val_acc: 0.8467
Epoch 79/100
11/11 [==============================] - 85s 8s/step - loss: 0.0245 - acc: 0.9964 - val_loss: 0.4103 - val_acc: 0.8613
Epoch 80/100
11/11 [==============================] - 85s 8s/step - loss: 0.0045 - acc: 0.9991 - val_loss: 1.3125 - val_acc: 0.8613
Epoch 81/100
11/11 [==============================] - 85s 8s/step - loss: 0.0051 - acc: 0.9982 - val_loss: 0.9189 - val_acc: 0.8759
Epoch 82/100
11/11 [==============================] - 84s 8s/step - loss: 0.0297 - acc: 0.9964 - val_loss: 2.2136 - val_acc: 0.8686
Epoch 83/100
11/11 [==============================] - 85s 8s/step - loss: 0.5154 - acc: 0.9464 - val_loss: 0.4283 - val_acc: 0.8540
Epoch 84/100
11/11 [==============================] - 85s 8s/step - loss: 0.0422 - acc: 0.9918 - val_loss: 1.1644 - val_acc: 0.8613
Epoch 85/100
11/11 [==============================] - 85s 8s/step - loss: 0.0041 - acc: 0.9991 - val_loss: 1.5484 - val_acc: 0.8832
Epoch 86/100
11/11 [==============================] - 85s 8s/step - loss: 0.0616 - acc: 0.9891 - val_loss: 1.4879 - val_acc: 0.9051
Epoch 87/100
11/11 [==============================] - 85s 8s/step - loss: 0.0012 - acc: 1.0000 - val_loss: 1.4122 - val_acc: 0.9270
Epoch 88/100
11/11 [==============================] - 85s 8s/step - loss: 0.0021 - acc: 0.9991 - val_loss: 2.8595 - val_acc: 0.8686
Epoch 89/100
11/11 [==============================] - 85s 8s/step - loss: 0.6280 - acc: 0.9436 - val_loss: 0.8323 - val_acc: 0.8394
Epoch 90/100
11/11 [==============================] - 85s 8s/step - loss: 0.0132 - acc: 0.9964 - val_loss: 0.6643 - val_acc: 0.8832
Epoch 91/100
11/11 [==============================] - 85s 8s/step - loss: 0.0064 - acc: 0.9973 - val_loss: 1.2470 - val_acc: 0.8759
Epoch 92/100
11/11 [==============================] - 85s 8s/step - loss: 0.4422 - acc: 0.9627 - val_loss: 0.5296 - val_acc: 0.8832
Epoch 93/100
11/11 [==============================] - 85s 8s/step - loss: 0.0093 - acc: 0.9973 - val_loss: 1.3732 - val_acc: 0.9270
Epoch 94/100
11/11 [==============================] - 86s 8s/step - loss: 0.0016 - acc: 0.9991 - val_loss: 0.7383 - val_acc: 0.9270
Epoch 95/100
11/11 [==============================] - 86s 8s/step - loss: 0.2925 - acc: 0.9609 - val_loss: 0.7694 - val_acc: 0.8832
Epoch 96/100
11/11 [==============================] - 86s 8s/step - loss: 0.0012 - acc: 1.0000 - val_loss: 0.8000 - val_acc: 0.8759
Epoch 97/100
11/11 [==============================] - 85s 8s/step - loss: 0.0401 - acc: 0.9936 - val_loss: 1.3490 - val_acc: 0.9124
Epoch 98/100
11/11 [==============================] - 85s 8s/step - loss: 0.0153 - acc: 0.9945 - val_loss: 0.3804 - val_acc: 0.8686
Epoch 99/100
11/11 [==============================] - 86s 8s/step - loss: 0.1283 - acc: 0.9836 - val_loss: 12.9869 - val_acc: 0.7956
Epoch 100/100
11/11 [==============================] - 84s 8s/step - loss: 0.1828 - acc: 0.9809 - val_loss: 0.0148 - val_acc: 0.9051
In [104]:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()


plt.show()
<Figure size 432x288 with 0 Axes>
In [137]:
results = model.evaluate(vggX_test, vggy_test)
print('Accuracy: %f ' % (results[1]*100))
print('Loss: %f' % results[0])
138/138 [==============================] - 6s 43ms/step
Accuracy: 89.130437 
Loss: 0.746511
In [106]:
Y_pred_test_cls = (model.predict(vggX_test) > 0.5).astype("int32")

plt.figure(figsize=(2,2))
plt.imshow(vggX_test[30]/255)
plt.show()

#print('Label - one hot encoded: \n',vggy_test_cat.iloc[30] )
print('Actual Label - one hot encoded:  ', vggy_test[30])
print('Predicted Label - one hot encoded: ',Y_pred_test_cls[30] )
Actual Label - one hot encoded:   [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Predicted Label - one hot encoded:  [0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
  • With Pretrained Model we can observe the training time is reduced and accuracies are increaed to 90%
  • CNN Models were better compared to NN as they are able to cpature details of foreground and background better and able to classify things.
  • Our Accuracies has improved from 20% in KNN ===> 40% in NN ===> 70% in CNN ===> 90% in Pretrained Models
In [ ]:
 
In [ ]:
 

Saving Model using Keras and Pickel

In [107]:
model.save('./Flower_Species_Classifier_VGG16_Augmented_100.h5')

model.save_weights('./Flower_Species_Classifier_weights_VGG16_Augmented_100.h5')
In [108]:
#Keras suggests to use model.save and model.save_weights but we can pickle model using joblib as well for bigger model if usual pickling fails
from sklearn.externals import joblib 
  
# Save the model as a pickle in a file 
joblib.dump(model, 'Flower_Species_Classifier_VGG16_Augmented_100.pkl') 
  
Out[108]:
['Flower_Species_Classifier_VGG16_Augmented_100.pkl']
In [109]:
# Load the model from the file 
model_joblib = joblib.load('Flower_Species_Classifier_VGG16_Augmented_100.pkl')  
  
In [ ]:
 
In [ ]:
 
In [ ]:
 
  • Creating code to classify single image for UI

Predicting single data

In [129]:
#converting single image to expected format for model prediction
pred_x = np.expand_dims(vggX_test[30], axis=0)
pred_x.shape
Out[129]:
(1, 100, 100, 3)
In [114]:
# Use the loaded model to make predictions 
#model_joblib.predict(X_test)
Y_pred_test_cls = (model_joblib.predict(pred_x) > 0.5).astype("int32")
In [116]:
plt.figure(figsize=(2,2))
plt.imshow(pred_x.reshape(img_height,img_width,3)/255)
plt.show()

#print('Label - one hot encoded: \n',vggy_test_cat.iloc[30] )
print('Actual Label - one hot encoded:  ', vggy_test[30])
print('Predicted Label - one hot encoded: ',Y_pred_test_cls[0] )
Actual Label - one hot encoded:   [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Predicted Label - one hot encoded:  [0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
In [121]:
one_hot_encoder.inverse_transform(Y_pred_test_cls[0].reshape(1,-1))
Out[121]:
array([['10']], dtype='<U2')
In [122]:
# Dumping the transformer to an external pickle file
joblib.dump(one_hot_encoder, 'VGG16_CNN_ohe.pkl')
Out[122]:
['VGG16_CNN_ohe.pkl']
In [123]:
#Predicting from Pickle file
In [124]:
pkl_model=joblib.load('./Flower_Species_Classifier_VGG16_Augmented_100.pkl')
In [125]:
# Use the loaded model to make predictions 
#model_joblib.predict(X_test)
Y_pred_test_cls = (pkl_model.predict(pred_x) > 0.5).astype("int32")
In [126]:
plt.figure(figsize=(2,2))
plt.imshow(pred_x.reshape(img_height,img_width,3)/255)
plt.show()

#print('Label - one hot encoded: \n',vggy_test_cat.iloc[30] )
print('Actual Label - one hot encoded:  ', vggy_test[30])
print('Predicted Label - one hot encoded: ',Y_pred_test_cls[0] )
Actual Label - one hot encoded:   [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Predicted Label - one hot encoded:  [0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
In [127]:
#Loading ppickeled encoder
ohe_pkl=joblib.load('./VGG16_CNN_ohe.pkl')
In [128]:
ohe_pkl.inverse_transform(Y_pred_test_cls[0].reshape(1,-1))
Out[128]:
array([['10']], dtype='<U2')

Predictinng from Images directly from directory

In [1]:
from PIL import ImageTk, Image
import numpy as np
#from tkinter import filedialog
#import tkinter as tk
import tensorflow
from tensorflow.keras.preprocessing.image import img_to_array
from sklearn.externals import joblib
import matplotlib.pyplot as plt
%matplotlib inline

def predict_img(image_data):
    #root = tk.Tk()
    #image_data = filedialog.askopenfilename(initialdir="/", title="Choose an image",
    #                                   filetypes=(("all files", "*.*"), ("jpg files", "*.jpg"), ("png files", "*.png")))
    
    original = Image.open(image_data)
    plt.figure(figsize = (5,5))
    plt.imshow(original)
    original = original.resize((100, 100), Image.ANTIALIAS)
    numpy_image = img_to_array(original)
    
    #expanding dimensions as model is expecting a array of image not just a single image
    image_batch = np.expand_dims(numpy_image, axis=0)
    processed_image=image_batch/255
    
    #Loading Pickled Model , we could have used Keras approach as well but we are going ahead with Pickle for now
    vgg_cnn_model =joblib.load('./Flower_Species_Classifier_VGG16_Augmented_100.pkl')

    #Loading pickeled encoder for reverse transforming output
    ohe_pkl=joblib.load('./VGG16_CNN_ohe.pkl')
    
    #Using the pickeled model for classification
    predictions = vgg_cnn_model.predict(processed_image)
    
    #inverse transforming the prediction for folder name details
    label = ohe_pkl.inverse_transform(predictions.reshape(1,-1))
    print("Predicted label for the Image: ",label)
    #root.quit()
    #root.destroy()
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
  warnings.warn(msg, category=FutureWarning)
In [2]:
#test path -->> F:\GreatLearning\AI\ComputerVision\Project\Flowers-Classification\Test\0.jpg
predict_img('F:\\GreatLearning\\AI\\ComputerVision\\Project\\Flowers-Classification\\Test\\0.jpg')
Using TensorFlow backend.
Predicted label for the Image:  [['0']]
In [3]:
predict_img('F:\\GreatLearning\\AI\\ComputerVision\\Project\\Flowers-Classification\\Test\\1.jpg')
Predicted label for the Image:  [['1']]
  • Creating separate jupyter notebook for UI to keep things modular & validate the pickle files can cork independently
In [ ]:
 

Observations

  • For a detailed observations we can refer my existing details on ML Models/NN/CNN variations and we can add the benefits that a pretrained model is bringing to the picture- Copied below from Part 2 of project
  • we have to divide the models into two major groups:

    • Machine learning Algorithms
    • Deep Neural Networks (NN & CNN)
  • Machine Learning Algorithms:

    • these algo's learn their mapping from provided input and output i.e. alog learns a function with diff sets of weight which help in predicting the accurate values.
    • For classification algorithms they learns from the term being used "Decision Boundries"
    • These decision boundries determine wether a new point belongs to which class or groups
    • Decision boundries could vary from linear to non-linear and these algo's are very strong to identify any relationship and map it with proper function and weight.
    • Image classification althoug a classification problem but it has much more details and relationships which ML Decision boundries are not able to map and replicate without very high computation and some times even thats not enough and becomme impossible for usinng these algos.
  • Deep Neural Netowrks ( ANN & CNN ):

    • Deep Neural networks brought in a different concept called Feature Engineering
      • Feature Extraction
      • Feature Selection
    • In feature extraction, we extract all the required features for our problem statement
    • In feature selection, we select the important features that improve the performance of our deep learning model.
    • By this design change in model is giving us huge advantage over Machine learning algorithms for identifying important features of the image and relationships with outputs which helps in categorizing/predictiong classes more accurately.
  • Chalanges for Neural Network:

    • NN amount of weight become unmanagable becuse it uses one perceptron for each input/pixel
    • too many parameters as its fully connected
    • each node is connected to previous and next layer making it very dense and many connections are redundant
    • Translation Invariant - NN behaves differently to shifted version of same image/zoomed/inverted. To make it learn all those you have to feed all varaions of data, which is higly difficult.
    • NN expects an identified object should appear on that specific place only which is never the real world scenario.
    • spatial information is lost when the image is flattened(matrix to vector)
    • This will make image processing difficult as the model will tend to overfit and capture unnecessary relationship
  • CNN Advantages:

    • Convolution:
      • feed forward NN wont see any order in their inputs
      • CNN on other hand is better at dealing with multiple kinds of spatial deformations. It take advantage of local spatial coherence of images.
      • This means that they are able to reduce dramatically the number of operation needed to process an image by using convolution on patches of adjacent pixels, because adjacent pixels together are meaningful.
      • We also call that local connectivity.
      • convolution in neural networks is operation of finding patterns. It has kernel that with which it basically scan an image and place where kernel have 100% match is a place where pattern matched.
    • Pooling layers:
      • downscale the image
      • This is possible because we retain throughout the network, features that are organized spatially like an image, and thus downscaling them makes sense as reducing the size of the image.

Pretrained Models observations compared with DNN/CNN

  • Deep Nueral networks:
    • A DNN works very well for classification and regression but it may not perform well with image classification, as we noticed in our models as well
    • We were not able to improve the model performance significantly even after working with multiple factors like
      • adding more layers
      • diff optimizers
      • no. of epochs
      • amount of data
  • Convolutional Neural Network:
    • Convolution layers are very successful in tasks involving images classification, object identification, face recognition etc.
    • They allow parameter sharing which results in a very optimized network compared to using Dense layers. This reduces the model commplexity as well the training time.
  • Transfer Learning:
    • "Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task."
    • "In transfer learning, we first train a base network on a base dataset and task, and then we repurpose the learned features, or transfer them, to a second target network to be trained on a target dataset and task. This process will tend to work if the features are general, meaning suitable to both base and target tasks, instead of specific to the base task."
    • idea is to use a state of the art model which is already trained on a larger dataset for long time and proven to work well in related task
    • using transfer learning we could train a model which have Test set accuracy of 90% in only 12 mins which is much better when compared to earlier models.
    • We can further increase the accuracy by using
      • More data/augmentation
      • More epochs/training steps
      • Adding more layers
      • More regularization.
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]: